Inpainting error occurs when using the OpenVINO script but not without it #13955
nguynphungvu34 started this conversation in General
Replies: 3 comments
- I also have the same problem.
- Same problem, does anyone have a solution?
- Same problem, but only with SDXL models.
- When I try to inpaint, I get this error:
venv "E:\Stable_Diffussion\stable-diffusion-webui\venv\Scripts\Python.exe"
Launching Web UI with arguments: --skip-torch-cuda-test --precision full --no-half --skip-prepare-environment
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
2023-11-12 08:36:50,808 - ControlNet - INFO - ControlNet v1.1.416
ControlNet preprocessor location: E:\Stable_Diffussion\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2023-11-12 08:36:51,013 - ControlNet - INFO - ControlNet v1.1.416
Loading weights [3e5ba578d8] from E:\Stable_Diffussion\stable-diffusion-webui\models\Stable-diffusion\hassakuModel_v1to1.3Inpainting.safetensors
Creating model from config: E:\Stable_Diffussion\stable-diffusion-webui\configs\v1-inpainting-inference.yaml
fatal: No names found, cannot describe anything.
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 11.6s (import torch: 4.3s, import gradio: 1.0s, setup paths: 0.9s, initialize shared: 0.2s, other imports: 0.7s, setup codeformer: 0.3s, load scripts: 2.4s, create ui: 1.3s, gradio launch: 0.5s).
Applying attention optimization: InvokeAI... done.
Model loaded in 7.6s (load weights from disk: 1.3s, create model: 0.8s, apply weights to model: 2.6s, apply float(): 2.3s, calculate empty prompt: 0.4s).
{'Mask blur': 4}
Loading weights [3e5ba578d8] from E:\Stable_Diffussion\stable-diffusion-webui\models\Stable-diffusion\hassakuModel_v1to1.3Inpainting.safetensors
OpenVINO Script: created model from config : E:\Stable_Diffussion\stable-diffusion-webui\configs\v1-inpainting-inference.yaml
E:\Stable_Diffussion\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
warnings.warn(
0%| | 0/15 [00:00<?, ?it/s][2023-11-12 08:38:37,087] torch._dynamo.convert_frame: [WARNING] WON'T CONVERT forward E:\Stable_Diffussion\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py line 739
[2023-11-12 08:38:37,087] torch._dynamo.convert_frame: [WARNING] due to:
[2023-11-12 08:38:37,087] torch._dynamo.convert_frame: [WARNING] Traceback (most recent call last):
[2023-11-12 08:38:37,087] torch._dynamo.convert_frame: [WARNING] File "E:\Stable_Diffussion\stable-diffusion-webui\venv\lib\site-packages\torch\_subclasses\fake_tensor.py", line 677, in conv
[2023-11-12 08:38:37,087] torch._dynamo.convert_frame: [WARNING] conv_backend = torch._C._select_conv_backend(**kwargs)
[2023-11-12 08:38:37,087] torch._dynamo.convert_frame: [WARNING] torch._dynamo.exc.TorchRuntimeError: Failed running call_module L__self___conv_in((FakeTensor(..., size=(2, 4, 64, 64)),), **{}):
[2023-11-12 08:38:37,087] torch._dynamo.convert_frame: [WARNING] Given groups=1, weight of size [320, 9, 3, 3], expected input[2, 4, 64, 64] to have 9 channels, but got 4 channels instead
[2023-11-12 08:38:37,087] torch._dynamo.convert_frame: [WARNING]
[2023-11-12 08:38:37,087] torch._dynamo.convert_frame: [WARNING] from user code:
[2023-11-12 08:38:37,087] torch._dynamo.convert_frame: [WARNING] File "E:\Stable_Diffussion\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 939, in forward
[2023-11-12 08:38:37,087] torch._dynamo.convert_frame: [WARNING] sample = self.conv_in(sample)
[2023-11-12 08:38:37,087] torch._dynamo.convert_frame: [WARNING]
[2023-11-12 08:38:37,087] torch._dynamo.convert_frame: [WARNING] Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
[2023-11-12 08:38:37,087] torch._dynamo.convert_frame: [WARNING]
[2023-11-12 08:38:37,087] torch._dynamo.convert_frame: [WARNING]
[2023-11-12 08:38:39,764] torch._dynamo.convert_frame: [WARNING] WON'T CONVERT network_Conv2d_forward E:\Stable_Diffussion\stable-diffusion-webui\extensions-builtin\Lora\networks.py line 438
[2023-11-12 08:38:39,764] torch._dynamo.convert_frame: [WARNING] due to:
[2023-11-12 08:38:39,764] torch._dynamo.convert_frame: [WARNING] Traceback (most recent call last):
[2023-11-12 08:38:39,764] torch._dynamo.convert_frame: [WARNING] File "E:\Stable_Diffussion\stable-diffusion-webui\venv\lib\site-packages\torch\_subclasses\fake_tensor.py", line 677, in conv
[2023-11-12 08:38:39,764] torch._dynamo.convert_frame: [WARNING] conv_backend = torch._C._select_conv_backend(**kwargs)
[2023-11-12 08:38:39,764] torch._dynamo.convert_frame: [WARNING] torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in method conv2d of type object at 0x00007FFD86BFF2E0>((FakeTensor(..., size=(2, 4, 64, 64)), Parameter(FakeTensor(..., size=(320, 9, 3, 3), requires_grad=True)), Parameter(FakeTensor(..., size=(320,), requires_grad=True)), (1, 1), (1, 1), (1, 1), 1), **{}):
[2023-11-12 08:38:39,764] torch._dynamo.convert_frame: [WARNING] Given groups=1, weight of size [320, 9, 3, 3], expected input[2, 4, 64, 64] to have 9 channels, but got 4 channels instead
[2023-11-12 08:38:39,764] torch._dynamo.convert_frame: [WARNING]
[2023-11-12 08:38:39,764] torch._dynamo.convert_frame: [WARNING] from user code:
[2023-11-12 08:38:39,764] torch._dynamo.convert_frame: [WARNING] File "E:\Stable_Diffussion\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 444, in network_Conv2d_forward
[2023-11-12 08:38:39,764] torch._dynamo.convert_frame: [WARNING] return originals.Conv2d_forward(self, input)
[2023-11-12 08:38:39,764] torch._dynamo.convert_frame: [WARNING] File "E:\Stable_Diffussion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
[2023-11-12 08:38:39,764] torch._dynamo.convert_frame: [WARNING] return self._conv_forward(input, self.weight, self.bias)
[2023-11-12 08:38:39,764] torch._dynamo.convert_frame: [WARNING] File "E:\Stable_Diffussion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
[2023-11-12 08:38:39,764] torch._dynamo.convert_frame: [WARNING] return F.conv2d(input, weight, bias, self.stride,
[2023-11-12 08:38:39,764] torch._dynamo.convert_frame: [WARNING]
[2023-11-12 08:38:39,764] torch._dynamo.convert_frame: [WARNING] Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
[2023-11-12 08:38:39,764] torch._dynamo.convert_frame: [WARNING]
[2023-11-12 08:38:39,764] torch._dynamo.convert_frame: [WARNING]
0%| | 0/15 [00:04<?, ?it/s]
*** Error completing request
*** Arguments: ('task(cl4u95vbyb622gf)', 4, '', 'underwear', [], None, None, None, None, None, <PIL.Image.Image image mode=RGB size=2894x4602 at 0x1FEC60F92A0>, <PIL.Image.Image image mode=RGBA size=2894x4602 at 0x1FEC60F95A0>, 20, 'Euler a', 4, 0, 1, 1, 1, 7, 1.5, 0.75, 0, 512, 512, 1, 0, 1, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x000001FEC469BC10>, 3, False, '', 0.8, -1, False, -1, 0, 0, 0, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001FEC4699EA0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001FEC469AD10>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001FEC469AF80>, '
CFG Scale
should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', 'None', 'None', 'GPU', True, 'Euler a', False, False, 'None', 0.8, 'Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8
', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', 'Will upscale the image by the selected scale factor; use width and height sliders to set tile size
', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}Traceback (most recent call last):
File "E:\Stable_Diffussion\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "E:\Stable_Diffussion\stable-diffusion-webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "E:\Stable_Diffussion\stable-diffusion-webui\modules\img2img.py", line 206, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "E:\Stable_Diffussion\stable-diffusion-webui\modules\scripts.py", line 601, in run
processed = script.run(p, *script_args)
File "E:\Stable_Diffussion\stable-diffusion-webui\scripts\openvino_accelerate.py", line 1224, in run
processed = process_images_openvino(p, model_config, vae_ckpt, p.sampler_name, enable_caching, openvino_device, mode, is_xl_ckpt, refiner_ckpt, refiner_frac)
File "E:\Stable_Diffussion\stable-diffusion-webui\scripts\openvino_accelerate.py", line 968, in process_images_openvino
output = shared.sd_diffusers_model(
File "E:\Stable_Diffussion\stable-diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\Stable_Diffussion\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion_inpaint.py", line 985, in call
noise_pred = self.unet(
File "E:\Stable_Diffussion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\Stable_Diffussion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Stable_Diffussion\stable-diffusion-webui\venv\lib\site-packages\torch_dynamo\eval_frame.py", line 328, in _fn
return fn(*args, **kwargs)
File "E:\Stable_Diffussion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\Stable_Diffussion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Stable_Diffussion\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 939, in forward
sample = self.conv_in(sample)
File "E:\Stable_Diffussion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\Stable_Diffussion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Stable_Diffussion\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 444, in network_Conv2d_forward
return originals.Conv2d_forward(self, input)
File "E:\Stable_Diffussion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
return self._conv_forward(input, self.weight, self.bias)
File "E:\Stable_Diffussion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [320, 9, 3, 3], expected input[2, 4, 64, 64] to have 9 channels, but got 4 channels instead
I noticed that this error does not occur if I don't use the OpenVINO script.
I tried using Inpaint upload as an alternative, but I still get the same error.
Is this a bug in the OpenVINO script's inpainting process?
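For context on the error itself: the RuntimeError says the UNet's first convolution (conv_in, weight shape [320, 9, 3, 3]) expects 9 input channels but only received the 4-channel noisy latents. An SD 1.x inpainting UNet takes 4 latent channels + 1 mask channel + 4 masked-image-latent channels = 9 channels, so it looks like the OpenVINO code path calls the UNet without concatenating the mask and masked-image latents. The snippet below is only a minimal plain-PyTorch sketch of that mismatch, with shapes copied from the traceback; it is not the actual OpenVINO script code.

```python
import torch

# First convolution of an SD 1.x inpainting UNet: 9 input channels, 320 output channels,
# matching the weight of size [320, 9, 3, 3] reported in the traceback.
conv_in = torch.nn.Conv2d(9, 320, kernel_size=3, padding=1)

latents = torch.randn(2, 4, 64, 64)               # noisy latents only -> 4 channels
mask = torch.randn(2, 1, 64, 64)                  # downscaled inpaint mask -> 1 channel
masked_image_latents = torch.randn(2, 4, 64, 64)  # VAE-encoded masked image -> 4 channels

# What the traceback shows: only the 4-channel latents reach conv_in.
try:
    conv_in(latents)
except RuntimeError as e:
    print(e)  # "... expected input[2, 4, 64, 64] to have 9 channels, but got 4 channels instead"

# What an inpainting pipeline is expected to feed a 9-channel UNet:
# latents concatenated with the mask and the masked-image latents.
sample = torch.cat([latents, mask, masked_image_latents], dim=1)
print(conv_in(sample).shape)  # torch.Size([2, 320, 64, 64])
```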