Stable Diffusion Webui ControlNet

ControlNet is a group of neural networks that control the artistic and structural aspects of image generation. Popular ControlNet models include Canny, Depth, OpenPose, and more. It gives you more control over images created by Stable Diffusion. This post provides use cases and a step-by-step guide on how to use ControlNet in Stable Diffusion Webui.

Table of Contents

  1. Setup ControlNet, Test with Canny
  2. Sketch to Image with Scribble
  3. Pose Control with OpenPose
  4. Create Backdrops with Depth
  5. Apply Styles with IP-Adapter
  6. Outpainting with Inpaint
  7. Upscale with Tile
  8. Figure animations with TemporalNet
  9. FAQ


1. Setup ControlNet, Test with Canny

If you don’t have Stable Diffusion Webui installed, see Install Stable Diffusion Webui on Windows first.

Now you can install ControlNet and test it with Canny.

1. Run “stable-diffusion-webui/webui-user.bat” to open Stable Diffusion Webui in a browser at http://127.0.0.1:7860/.
2. Click the Extensions tab at the top. Search for ControlNet. Select sd-webui-controlnet and click Install.
3. Go to sd_controlnet_collection to download ControlNet models, such as “t2i-adapter_xl_canny.safetensors.” Put it in the “stable-diffusion-webui\extensions\sd-webui-controlnet\models” directory.
4. Close your Webui console and browser, then rerun Webui. You will see a new ControlNet section in the lower-left area of the screen. Now we can test it.
5. Select a Stable Diffusion checkpoint from the dropdown. In the txt2img tab, enter your prompt. Expand the ControlNet section and drag your reference image into the first Single Image slot. Check “Enable.”
6. In Control Type, select “Canny”; the Preprocessor fills with “canny.” In Model, select “t2i-adapter_xl_canny.”
7. Click the Generate button. You will see that the generated image follows the structure of the input image in ControlNet. (The same setup can also be driven through the Webui API; see the sketch below.)
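If you prefer scripting, the same Canny setup can be driven through the Webui API. Below is a minimal sketch, assuming Webui was launched with the --api flag; the prompt, file names, and the exact ControlNet unit fields are illustrative and can differ between sd-webui-controlnet versions (check the /docs page of your install).

```python
# Minimal sketch: txt2img with one ControlNet (Canny) unit via the Webui API.
# Assumes Webui is running at 127.0.0.1:7860 with the --api flag.
import base64
import requests

URL = "http://127.0.0.1:7860"

with open("reference.png", "rb") as f:  # hypothetical reference image
    ref_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "prompt": "a cozy cabin in a snowy forest",  # example prompt
    "steps": 25,
    "width": 768,
    "height": 768,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "enabled": True,
                "module": "canny",                # preprocessor name
                "model": "t2i-adapter_xl_canny",  # model from the dropdown
                "image": ref_b64,                 # field name varies by version
            }]
        }
    },
}

r = requests.post(f"{URL}/sdapi/v1/txt2img", json=payload, timeout=600)
r.raise_for_status()
with open("output.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```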

Install ControlNet


2. Sketch to Image with Scribble

1. Go to ControlNet-v1-1 to download “control_v11p_sd15_scribble.pth” and put it in the “extensions\sd-webui-controlnet\models” directory (or fetch it from the command line; see the sketch after these steps).
2. Draw a simple sketch for your image.
3. Run “stable-diffusion-webui/webui-user.bat” to open Stable Diffusion Webui in a browser at http://127.0.0.1:7860/. Select a Stable Diffusion checkpoint in the dropdown.
4. In the txt2img tab, enter your prompt to describe your image. Most settings can be left at their defaults. If the checkpoint is SDXL, increase Width and Height to 768 or above, keeping the same aspect ratio as the input sketch.
5. Expand ControlNet section. Drag your drawing to the first Single Image. Check “Enable”.
6. In Control Type, select “Scribble.” The Preprocessor fills with “scribble_pidinet.” In the Model dropdown, select “control_v11p_sd15_scribble.”
7. Click Generate button.

Tip: If the generated image doesn’t follow the input sketch, try two things: (1) Check the console for errors due to conflicts; if there are any, switch to another checkpoint and try again. (2) Close and restart Webui to clear cached data.
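As an alternative to the browser download in step 1, the model can be fetched from the command line with the huggingface_hub client. A minimal sketch, assuming the lllyasviel/ControlNet-v1-1 repository layout on Hugging Face and that you run it from the stable-diffusion-webui folder:

```python
# Minimal sketch: download a ControlNet model straight into the folder
# the extension scans. Requires: pip install huggingface_hub
from pathlib import Path
from huggingface_hub import hf_hub_download

models_dir = Path("extensions/sd-webui-controlnet/models")
models_dir.mkdir(parents=True, exist_ok=True)

path = hf_hub_download(
    repo_id="lllyasviel/ControlNet-v1-1",       # assumed repo id
    filename="control_v11p_sd15_scribble.pth",
    local_dir=models_dir,
)
print("Saved to", path)
```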

Sketch to image


3. Pose Control with OpenPose

1. Go to ControlNet-v1-1 to download “control_v11p_sd15_openpose.pth” and put it in the “extensions\sd-webui-controlnet\models” directory.
2. Run “webui-user.bat” to open Stable Diffusion Webui.
3. In the txt2img tab, enter your prompt. Keep Width and Height at the same aspect ratio as the input image.
4. Expand the ControlNet area. Load an image with the pose you want. Check “Enable” and “Pixel Perfect” underneath.
5. Set Control Type to “OpenPose,” Preprocessor to “openpose_full,” and Model to “control_v11p_sd15_openpose.”
6. Click the fire icon next to Preprocessor to generate an OpenPose skeleton in the preview. You can download it for later use, or extract it via the API as sketched after these steps.
7. Click Generate button.
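Step 6 can also be done programmatically: recent versions of the extension expose a /controlnet/detect endpoint that runs a preprocessor on its own. A minimal sketch, assuming Webui runs with --api and that your sd-webui-controlnet version provides this endpoint (check /docs):

```python
# Minimal sketch: extract an OpenPose skeleton for later reuse.
import base64
import requests

with open("pose_reference.png", "rb") as f:  # hypothetical input image
    img_b64 = base64.b64encode(f.read()).decode("utf-8")

r = requests.post(
    "http://127.0.0.1:7860/controlnet/detect",
    json={
        "controlnet_module": "openpose_full",   # same preprocessor as in the UI
        "controlnet_input_images": [img_b64],
    },
    timeout=120,
)
r.raise_for_status()
with open("openpose_map.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```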

openpose


4. Create Backdrops with Depth

1. Go to ControlNet-v1-1 to download “control_v11f1p_sd15_depth.pth” and put it in the “extensions\sd-webui-controlnet\models” directory.
2. Search the internet for an image whose depth layout is similar to the image you want to generate.
3. Run “stable-diffusion-webui/webui-user.bat” to open Stable Diffusion Webui in a browser at http://127.0.0.1:7860/.
4. Select a Stable Diffusion checkpoint in the dropdown. In the txt2img tab, enter your prompt to describe your image. Set Width and Height to around 768–1024 if you are using an SDXL checkpoint, keeping the same aspect ratio as the input image (a small helper for this is sketched after these steps). Other settings can be left at their defaults.
5. Expand the ControlNet section. Drag your reference image to the first Single Image. Check “Enable.”
6. In Control Type, select “Depth.” The Preprocessor fills with “depth_midas.” In the Model dropdown, select “control_v11f1p_sd15_depth.”
7. Click Generate button.
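Matching the aspect ratio by hand is error-prone, so here is a small helper. A minimal sketch, assuming Pillow is installed; it scales the reference image's size to a target long side and rounds both dimensions to multiples of 8, which Stable Diffusion checkpoints expect:

```python
# Minimal sketch: compute Width/Height that match a reference image's
# aspect ratio. Requires: pip install Pillow
from PIL import Image

def matching_size(path: str, target_long_side: int = 1024) -> tuple[int, int]:
    w, h = Image.open(path).size
    scale = target_long_side / max(w, h)
    # round both sides to the nearest multiple of 8
    return round(w * scale / 8) * 8, round(h * scale / 8) * 8

print(matching_size("depth_reference.jpg"))  # hypothetical file; e.g. (1024, 680)
```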

depth controlnet


5. Apply Styles with IP-Adapter

1. Go to IP-Adapter to download models, such as “ip-adapter_sd15.bin” or “ip-adapter_sd15.safetensors,” and put them in the “extensions\sd-webui-controlnet\models” directory.
2. Run “stable-diffusion-webui/webui-user.bat” to open Stable Diffusion Webui in a browser at http://127.0.0.1:7860/.
3. Select a Stable Diffusion checkpoint in the dropdown. In the txt2img tab, enter your prompt to describe your image. Set Width and Height to around 768–1024 if you are using an SDXL checkpoint. Other settings can be left at their defaults.
4. Expand the ControlNet section. Drag your style reference image to the first Single Image. Check “Enable.”
5. In Control Type, select “IP-Adapter.” In Preprocessor, select “ip-adapter_clip_h.” In the Model dropdown, select “ip-adapter_sd15.”
6. Click the Generate button. (A multi-unit API variant is sketched below.)
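Because each entry in the ControlNet "args" list is one unit, IP-Adapter combines naturally with a structural model in a single API call: one unit carries the style, another the composition. A minimal sketch, assuming Webui runs with --api, at least two ControlNet units are enabled in Settings, and the model/file names are as in this guide:

```python
# Minimal sketch: style (IP-Adapter) plus structure (Canny) in one call.
import base64
import requests

def b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

payload = {
    "prompt": "a portrait, detailed, soft light",  # example prompt
    "alwayson_scripts": {"controlnet": {"args": [
        {   # unit 0: style from the reference image
            "enabled": True,
            "module": "ip-adapter_clip_h",
            "model": "ip-adapter_sd15",
            "image": b64("style_reference.png"),      # hypothetical file
        },
        {   # unit 1: structure from an edge map
            "enabled": True,
            "module": "canny",
            "model": "control_v11p_sd15_canny",       # assumed model name
            "image": b64("structure_reference.png"),  # hypothetical file
        },
    ]}},
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=600)
r.raise_for_status()
```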

t2i style controlnet


6. Outpainting with Inpaint

1. Go to ControlNet-v1-1 to download “control_v11p_sd15_inpaint.pth” and put it in the “extensions\sd-webui-controlnet\models” directory.
2. Run “webui-user.bat” to open Stable Diffusion Webui in a browser at http://127.0.0.1:7860/.
3. Select a checkpoint. In the txt2img tab, enter your prompt to describe the image you want. Set Batch count to 4.
4. In the ControlNet section, drag your image in. Check “Enable” underneath. Set Control Type to “Inpaint.” This changes the Preprocessor to “inpaint_only” and the Model to “control_v11p_sd15_inpaint.”
5. (Important) Find the width and height of the original image. Increase either width or height (not both) at a time; a helper that plans this schedule is sketched after these steps. In the ControlNet area, set Resize Mode to “Resize and Fill.”
6. Click the Generate button.
7. After images are generated, drag a good output image into ControlNet as the new baseline. Increase Width or Height again and click the Generate button.
8. Repeat the process until you have your ideal image.
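The grow-one-side-at-a-time rule in step 5 is easy to get wrong over several rounds, so here is a small planner. A minimal sketch; the 25% growth per round is an assumption, target sizes must be at least the original sizes, and dimensions are rounded to multiples of 8:

```python
# Minimal sketch: plan an outpainting schedule that enlarges one dimension
# at a time until the target size is reached.
def outpaint_schedule(w, h, target_w, target_h, grow=1.25):
    steps = []
    while (w, h) != (target_w, target_h):
        if w < target_w:
            # widen first; max(w + 8, ...) guarantees progress
            w = min(target_w, max(w + 8, int(w * grow) // 8 * 8))
        else:
            h = min(target_h, max(h + 8, int(h * grow) // 8 * 8))
        steps.append((w, h))
    return steps

print(outpaint_schedule(768, 512, 1280, 768))
# [(960, 512), (1200, 512), (1280, 512), (1280, 640), (1280, 768)]
```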

outpainting controlnet


7. Upscale with Tile

1. Open Stable Diffusion Webui, click the Extensions tab, and click Load from. Search for Ultimate SD upscale and click Install.
2. Go to ControlNet-v1-1 to download the tile model, e.g., “control_v11f1e_sd15_tile.pth.” Put it in the “extensions\sd-webui-controlnet\models” folder.
3. Go to the mega site to download “4x-UltraSharp.pth.” Put it in the “models\ESRGAN” folder.
4. Restart Webui. In the img2img tab, load an image you want to upscale and write a prompt to describe it. Decrease Denoising strength to around 0.1.
5. Expand the ControlNet area. Check “Enable.” Select “Tile” from Control Type. In Preprocessor, select “tile_resample.” In Model, select “control_v11f1e_sd15_tile.”
6. In the Script dropdown, select “Ultimate SD upscale.” For Target size type, select “Scale from image size.” Change Scale to 2–6.
7. In Upscaler, select “4x-UltraSharp.”
8. Click the Generate button. It can take several minutes depending on how far you upscale; the sketch below estimates the number of tiles, which is what drives the render time.
9. When you zoom into the output image, you may see tile edges. You can fix these with Photoshop’s blending brush.
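Render time grows with the number of tiles the script must diffuse, so it helps to estimate that up front. A minimal sketch, assuming the Ultimate SD upscale default tile size of 512 px:

```python
# Minimal sketch: estimate the tile count for a given upscale.
import math

def tile_count(width: int, height: int, scale: float, tile: int = 512) -> int:
    out_w, out_h = width * scale, height * scale
    return math.ceil(out_w / tile) * math.ceil(out_h / tile)

print(tile_count(1024, 768, 2))  # 12 tiles for a 2048x1536 output
```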

upscale controlnet


8. Figure animations with TemporalNet

1. Go to TemporalNet to download “diff_control_sd15_temporalnet_fp16.safetensors” and “cldm_v15.yaml.” Rename “cldm_v15.yaml” to “diff_control_sd15_temporalnet_fp16.yaml.” Put both files in the “extensions\sd-webui-controlnet\models” directory. TemporalNet is a ControlNet model that enhances the consistency of generated image sequences and reduces flickering.
2. Prepare a video in which a character is dancing or performing an action. If the video doesn’t have a green-screen background, remove the background with software tools and fill it with green. Render the video as an image sequence into one directory (an ffmpeg sketch for this appears after these steps).
3. Run “webui-user.bat” to open Stable Diffusion Webui. Click the Settings tab. Select “User Interface” from the left menu. In the Quicksettings list, add “initial_noise_multiplier.” Select “ControlNet” from the left menu and check “Do not append detectmap to output.” Click the Apply settings and Reload UI buttons.
4. After the UI reloads, select a checkpoint, such as “revAnimated.” In Noise multiplier for img2img, change the value to 0.7.
5. Click the img2img tab and load the first image of the image sequence into the area. Enter your prompt to describe the video. Change Width and Height to the size of your images. Set CFG Scale to 7 and Denoising strength to 0.75–0.9.
6. Expand the ControlNet section. In ControlNet Unit 0, check “Enable.” In Control Type, select “OpenPose.” Set Preprocessor to “dw_openpose_full” and Model to “control_v11p_sd15_openpose.” Check “My prompt is more important.” This improves the character’s pose and fingers.
7. In ControlNet Unit 1, check “Enable.” In Control Type, select “All.” Set Preprocessor to “None” and Model to “diff_control_sd15_temporalnet_fp16.” Check “My prompt is more important.”
8. Click the Generate button to generate one image. Adjust your prompt, noise multiplier, CFG Scale, and Denoising strength until you are happy with the rendered image. Copy the seed number from the output image into the Seed field to replace “-1.”
9. Under the Generation tab, click the “Batch” tab. Set the Input directory to the path of your image sequence. In the Output directory, set the path to where you want to save the output.
10. Click the Generate button. Monitor the progress in the Anaconda prompt. When it finishes, go to the output directory to check the result.
11. You can continue tuning the prompt, the value of Noise multiplier, CFG Scale, and Denoising strength to suit your needs.
12. When you are happy with the result, import the output image sequence into After Effects or other video editing software to render it as a video.

Tip: If the generated image sequence flickers, try two things: (1) Clean up your video so there are no stray objects in the background, or use a green screen. (2) Add more detailed descriptions to your prompt to improve consistency across frames.
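For steps 2 and 12, ffmpeg handles both directions of the video/image-sequence conversion. A minimal sketch, assuming ffmpeg is on PATH; the file names, frame pattern, and 30 fps are placeholders:

```python
# Minimal sketch: video -> frames for the Batch input, and frames -> video
# after the batch run. The %05d pattern assumes zero-padded frame names.
import subprocess
from pathlib import Path

Path("frames").mkdir(exist_ok=True)

# split the source video into numbered PNG frames
subprocess.run(["ffmpeg", "-i", "dance.mp4", "frames/%05d.png"], check=True)

# ...run the img2img batch, then reassemble the rendered frames
subprocess.run(
    ["ffmpeg", "-framerate", "30", "-i", "output/%05d.png",
     "-c:v", "libx264", "-pix_fmt", "yuv420p", "result.mp4"],
    check=True,
)
```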

temporalnet


9. FAQ

What is ControlNet?

ControlNet is a neural network architecture that works with Stable Diffusion to control diffusion models by adding extra conditions. Detailed information can be found at the ControlNet GitHub repository.

What major functionalities does ControlNet provide in Stable Diffusion Webui?

1. Sketch-to-Image with Scribble.
2. Pose control with OpenPose.
3. Create backdrops with Depth.
4. Apply styles with IP-Adapter.
5. Outpainting with Inpaint.
6. Upscale images with Tile.
7. Figure animations with TemporalNet.
