
[Bug]: Inpaint Failed with SDXL Inpainting Checkpoint #15973

Closed
4 of 6 tasks
jordenyt opened this issue Jun 8, 2024 · 5 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments

@jordenyt (Contributor) commented Jun 8, 2024

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

After checking out the dev branch, inpainting fails after selecting an SDXL Inpainting model (https://civitai.com/models/403751/dreamshaper-xl-lightning-inpainting): the program throws an error.

Steps to reproduce the problem

  1. Go to the Inpaint tab.
  2. Select an SDXL Inpainting checkpoint.
  3. Select a proper scheduler / sampler / CFG scale / steps.
  4. Import a picture and create the mask.
  5. Press Generate.
  6. The following error is thrown:

RuntimeError: Given groups=1, weight of size [320, 9, 3, 3], expected input[2, 4, 160, 120] to have 9 channels, but got 4 channels instead.
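The shape mismatch in the error is characteristic of an inpainting UNet: its first convolution expects 9 input channels (4 noisy-latent channels + 4 masked-image-latent channels + 1 mask channel), but here it was fed a plain 4-channel img2img latent. A minimal PyTorch sketch of the mismatch, with layer sizes taken from the error message (the layer name and the concatenation order are illustrative, not webui's exact code):

```python
import torch
import torch.nn as nn

# First conv of an SDXL *inpainting* UNet: 9 input channels -> 320 feature maps.
conv_in = nn.Conv2d(9, 320, kernel_size=3, padding=1)

latent = torch.randn(2, 4, 160, 120)  # plain img2img latent: only 4 channels

# Feeding the bare latent reproduces the reported RuntimeError:
try:
    conv_in(latent)
except RuntimeError as e:
    print(e)  # "... expected input[2, 4, 160, 120] to have 9 channels, but got 4 channels ..."

# With the inpainting conditioning concatenated along the channel axis,
# the shapes line up:
mask = torch.randn(2, 1, 160, 120)          # downscaled inpaint mask (1 channel)
masked_image = torch.randn(2, 4, 160, 120)  # VAE latent of the masked image (4 channels)
x = torch.cat([latent, mask, masked_image], dim=1)  # -> (2, 9, 160, 120)
print(conv_in(x).shape)                     # torch.Size([2, 320, 160, 120])
```

In other words, the dev branch was dropping the mask/masked-image conditioning that the inpainting checkpoint's config requires.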

What should have happened?

The inpainting should complete properly. This works on the "master" branch.

What browsers do you use to access the UI?

Google Chrome

Sysinfo

sysinfo-2024-06-08-18-15.json

Console logs

Loading model sdxlTurboInpaint\dreamshaperXL_lightningInpaint.safetensors [1a49cd4473] (3 out of 5)09<00:00,  1.09it/s]
Loading weights [1a49cd4473] from E:\Workspace\stable-diffusion-webui\models\Stable-diffusion\sdxlTurboInpaint\dreamshaperXL_lightningInpaint.safetensors
Creating model from config: E:\Workspace\stable-diffusion-webui\configs\sd_xl_inpaint.yaml
Applying attention optimization: xformers... done.
Model loaded in 15.9s (load weights from disk: 0.5s, create model: 0.5s, apply weights to model: 14.1s, calculate empty prompt: 0.6s).
  0%|                                                                                            | 0/7 [00:01<?, ?it/s]
*** Error completing request
*** Arguments: ('task(nxm24potrmwi3k5)', <gradio.routes.Request object at 0x00000125ABB17DF0>, 2, 'Young slim Korean girl, long brown hair, sitting in cafe, collarbone, cleavage, big breast', 'fat', [], None, None, {'image': <PIL.Image.Image image mode=RGBA size=960x1280 at 0x125A9190070>, 'mask': <PIL.Image.Image image mode=RGB size=960x1280 at 0x125A91A9CF0>}, None, None, None, None, 4, 0, 2, 1, 1, 2, 1.5, 0.75, 0.0, 1280, 960, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', 'upload', None, 0, 8, 'DPM++ SDE', 'Automatic', False, 1, 0.5, 4, 0, 0.5, 2, -1, False, -1, 0, 0, 0, False, '', 0.8, ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=64, threshold_a=64.0, threshold_b=64.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=True, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=64, threshold_a=64.0, threshold_b=64.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=True, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, ipadapter_input=None, mask=None, 
batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=64, threshold_a=64.0, threshold_b=64.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=True, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
    Traceback (most recent call last):
      File "E:\Workspace\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "E:\Workspace\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "E:\Workspace\stable-diffusion-webui\modules\img2img.py", line 242, in img2img
        processed = process_images(p)
      File "E:\Workspace\stable-diffusion-webui\modules\processing.py", line 843, in process_images
        res = process_images_inner(p)
      File "E:\Workspace\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "E:\Workspace\stable-diffusion-webui\modules\processing.py", line 980, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "E:\Workspace\stable-diffusion-webui\modules\processing.py", line 1740, in sample
        samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
      File "E:\Workspace\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 172, in sample_img2img
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "E:\Workspace\stable-diffusion-webui\modules\sd_samplers_common.py", line 272, in launch_sampling
        return func()
      File "E:\Workspace\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 172, in <lambda>
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "E:\Workspace\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "E:\Workspace\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 553, in sample_dpmpp_sde
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "E:\Workspace\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\Workspace\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\Workspace\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 244, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "E:\Workspace\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\Workspace\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\Workspace\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "E:\Workspace\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "E:\Workspace\stable-diffusion-webui\modules\sd_models_xl.py", line 43, in apply_model
        return self.model(x, t, cond)
      File "E:\Workspace\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\Workspace\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1568, in _call_impl
        result = forward_call(*args, **kwargs)
      File "E:\Workspace\stable-diffusion-webui\modules\sd_hijack_utils.py", line 22, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "E:\Workspace\stable-diffusion-webui\modules\sd_hijack_utils.py", line 34, in __call__
        return self.__sub_func(self.__orig_func, *args, **kwargs)
      File "E:\Workspace\stable-diffusion-webui\modules\sd_hijack_unet.py", line 50, in apply_model
        result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
      File "E:\Workspace\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\wrappers.py", line 28, in forward
        return self.diffusion_model(
      File "E:\Workspace\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\Workspace\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1568, in _call_impl
        result = forward_call(*args, **kwargs)
      File "E:\Workspace\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
        return original_forward(self, x, timesteps, context, *args, **kwargs)
      File "E:\Workspace\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 993, in forward
        h = module(h, emb, context)
      File "E:\Workspace\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\Workspace\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\Workspace\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 102, in forward
        x = layer(x)
      File "E:\Workspace\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\Workspace\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\Workspace\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 542, in network_Conv2d_forward
        return originals.Conv2d_forward(self, input)
      File "E:\Workspace\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
        return self._conv_forward(input, self.weight, self.bias)
      File "E:\Workspace\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
        return F.conv2d(input, weight, bias, self.stride,
    RuntimeError: Given groups=1, weight of size [320, 9, 3, 3], expected input[2, 4, 160, 120] to have 9 channels, but got 4 channels instead

Additional information

This issue also happened after I merged @huchenlei's PR alone into my local branch.

@jordenyt jordenyt added the bug-report Report of a bug, yet to be confirmed label Jun 8, 2024
@willhsmit

Confirmed this also happens with the original stable-diffusion-xl-1.0-inpainting-0.1 on the dev branch, it's not specific to the dreamshaper/lightning model.

@huchenlei (Contributor)

Ack. Link of the culprit PR: #15806.

@huchenlei huchenlei mentioned this issue Jun 9, 2024
@huchenlei (Contributor)

@jordenyt Can you help verify if #15976 works for you?

@ThereforeGames (Contributor)

@huchenlei I encountered the same error and can confirm that #15976 fixed it. Thank you!

@jordenyt (Contributor, Author) commented Jun 9, 2024

> @jordenyt Can you help verify if #15976 works for you?

Confirmed it is working. You are amazing!

@jordenyt jordenyt closed this as completed Jun 9, 2024