

In this section, I will show you step by step how to use inpainting to fix small defects. We will use Stable Diffusion AI and the AUTOMATIC1111 GUI. See my quick start guide for setting up in Google's cloud server.

Original image

I will use an original image from the Lonely Palace prompt:

, (long hair:0.5), headLeaf, wearing stola, vast roman palace, large window, medieval renaissance palace, ((large room)), 4k, arstation, intricate, elegant, highly detailed

(Detailed settings can be found here.) It's a fine image, but I would like to fix the following issues.

Do you know there is a Stable Diffusion model trained for inpainting? You can use it if you want the best result. But usually, it's OK to use the same model you generated the image with for inpainting.

To install the v1.5 inpainting model, download the model checkpoint file and put it in the folder stable-diffusion-webui/models/Stable-diffusion. In AUTOMATIC1111, press the refresh icon next to the checkpoint selection dropdown at the top left, then select sd-v1-5-inpainting.ckpt to enable the model.

In the AUTOMATIC1111 GUI, select the img2img tab and then the Inpaint sub-tab. Upload the image to the inpainting canvas and use the paintbrush tool to create a mask; this is the area you want Stable Diffusion to regenerate. We will inpaint both the right arm and the face at the same time. You can reuse the original prompt for fixing defects: inpainting is like generating multiple images, but only in a particular area.
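The checkpoint-install step can be sketched in Python. Only the destination folder comes from the text; the script creates an empty stand-in checkpoint file so it runs on its own, whereas in real use you would first download sd-v1-5-inpainting.ckpt yourself:

```python
from pathlib import Path
import shutil

# Folder where AUTOMATIC1111 looks for model checkpoints (path from the text).
# Run this from the directory that contains stable-diffusion-webui/.
model_dir = Path("stable-diffusion-webui/models/Stable-diffusion")
model_dir.mkdir(parents=True, exist_ok=True)

# Assume sd-v1-5-inpainting.ckpt was already downloaded to the current
# directory. (Here we create an empty stand-in so the sketch is runnable.)
ckpt = Path("sd-v1-5-inpainting.ckpt")
ckpt.touch()

# Move the checkpoint into place; after this, pressing the refresh icon in
# AUTOMATIC1111 makes it appear in the checkpoint selection dropdown.
shutil.move(str(ckpt), str(model_dir / ckpt.name))
print((model_dir / ckpt.name).exists())  # prints: True
```

After the file is in place, refresh the checkpoint dropdown in the GUI and select sd-v1-5-inpainting.ckpt as described above.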

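For readers who prefer scripting over the canvas, AUTOMATIC1111 also exposes an HTTP API (when the webui is started with the --api flag), and inpainting is the img2img endpoint plus a mask image. The sketch below only builds the request payload; the field names follow the /sdapi/v1/img2img endpoint as I understand it and should be checked against your webui version, and the image bytes, prompt, and parameter values are placeholders:

```python
import base64

def b64(data: bytes) -> str:
    # The AUTOMATIC1111 API expects images as base64-encoded strings.
    return base64.b64encode(data).decode()

# Stand-in bytes for the generated image and the painted mask; in real use
# these would be the PNG file contents read from disk. In the mask, white
# pixels mark the area to regenerate (here, the right arm and the face).
image_bytes = b"placeholder-image-bytes"
mask_bytes = b"placeholder-mask-bytes"

payload = {
    "prompt": "(reuse the original Lonely Palace prompt here)",
    "init_images": [b64(image_bytes)],
    "mask": b64(mask_bytes),
    "denoising_strength": 0.75,   # how strongly the masked area is redrawn
    "inpainting_fill": 1,         # masked-content fill mode (1 = original)
    "inpaint_full_res": True,     # inpaint the masked region at full resolution
}

# With a running webui, this payload would be POSTed to
# http://127.0.0.1:7860/sdapi/v1/img2img (e.g. with the requests library).
print(sorted(payload))
```

Because only the masked area is regenerated, this is the programmatic equivalent of the paintbrush workflow described above.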