Checklist

- The issue is caused by an extension, but I believe it is caused by a bug in the webui
- The issue exists in the current version of the webui
- The issue has not been reported before recently
- The issue has been reported before but has not been fixed yet
What happened?
The error occurs when I move the slider that closes the lips in the Live Portrait extension. I didn't get this error before, but now I do.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-605-g05b01da0
Commit hash: 05b01da
Installing sd-webui-live-portrait requirement: changing imageio-ffmpeg version from None to 0.5.1
Installing sd-webui-live-portrait requirement: pykalman
Installing sd-webui-live-portrait requirement: onnxruntime-gpu==1.18 --extra-index-url "https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/"
Found existing installation: onnxruntime-gpu 1.18.0
Uninstalling onnxruntime-gpu-1.18.0:
Successfully uninstalled onnxruntime-gpu-1.18.0
Looking in indexes: https://pypi.org/simple, https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
Collecting onnxruntime-gpu==1.17.1
Downloading https://aiinfra.pkgs.visualstudio.com/2692857e-05ef-43b4-ba9c-ccf1c22c437c/_packaging/9387c3aa-d9ad-4513-968c-383f6f7f53b8/pypi/download/onnxruntime-gpu/1.17.1/onnxruntime_gpu-1.17.1-cp310-cp310-win_amd64.whl (149.1 MB)
-------------------------------------- 149.1/149.1 MB 2.4 MB/s eta 0:00:00
Requirement already satisfied: coloredlogs in c:\ia(forge)\system\python\lib\site-packages (from onnxruntime-gpu==1.17.1) (15.0.1)
Requirement already satisfied: flatbuffers in c:\ia(forge)\system\python\lib\site-packages (from onnxruntime-gpu==1.17.1) (24.3.25)
Requirement already satisfied: numpy>=1.21.6 in c:\ia(forge)\system\python\lib\site-packages (from onnxruntime-gpu==1.17.1) (1.26.2)
Requirement already satisfied: packaging in c:\ia(forge)\system\python\lib\site-packages (from onnxruntime-gpu==1.17.1) (23.2)
Requirement already satisfied: protobuf in c:\ia(forge)\system\python\lib\site-packages (from onnxruntime-gpu==1.17.1) (3.20.0)
Requirement already satisfied: sympy in c:\ia(forge)\system\python\lib\site-packages (from onnxruntime-gpu==1.17.1) (1.12)
Requirement already satisfied: humanfriendly>=9.1 in c:\ia(forge)\system\python\lib\site-packages (from coloredlogs->onnxruntime-gpu==1.17.1) (10.0)
Requirement already satisfied: mpmath>=0.19 in c:\ia(forge)\system\python\lib\site-packages (from sympy->onnxruntime-gpu==1.17.1) (1.3.0)
Requirement already satisfied: pyreadline3 in c:\ia(forge)\system\python\lib\site-packages (from humanfriendly>=9.1->coloredlogs->onnxruntime-gpu==1.17.1) (3.4.1)
Installing collected packages: onnxruntime-gpu
Successfully installed onnxruntime-gpu-1.17.1
CUDA 12.1
+---------------------------------+
--- PLEASE, RESTART the Server! ---
+---------------------------------+
Launching Web UI with arguments: --xformers --skip-torch-cuda-test --precision full --no-half --no-half-vae
Total VRAM 6144 MB, total RAM 15834 MB
pytorch version: 2.3.1+cu121
xformers version: 0.0.27
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1660 SUPER : native
VAE dtype preferences: [torch.float32] -> torch.float32
CUDA Using Stream: False
C:\ia(forge)\system\python\lib\site-packages\transformers\utils\hub.py:128: FutureWarning: Using TRANSFORMERS_CACHE is deprecated and will be removed in v5 of Transformers. Use HF_HOME instead.
warnings.warn(
Using xformers cross attention
Using xformers attention for VAE
ControlNet preprocessor location: C:\ia(forge)\webui\models\ControlNetPreprocessor
[-] ADetailer initialized. version: 24.9.0, num models: 10
10:01:00 - ReActor - STATUS - Running v0.7.1-b2 on Device: CUDA
Loading additional modules ... done.
2024-11-15 10:01:24,738 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': 'C:\ia(forge)\webui\models\Stable-diffusion\realisticVisionV60B1_v51VAE-inpainting.safetensors', 'hash': 'b7aa5c67'}, 'additional_modules': [], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Running on local URL: http://127.0.0.1:7860/
To create a public link, set share=True in launch().
Startup time: 217.9s (prepare environment: 165.2s, launcher: 0.7s, import torch: 12.4s, initialize shared: 0.3s, other imports: 0.6s, load scripts: 5.7s, initialize google blockly: 21.9s, create ui: 7.5s, gradio launch: 3.3s, app_started_callback: 0.1s).
Environment vars changed: {'stream': False, 'inference_memory': 4687.0, 'pin_shared_memory': False}
[GPU Setting] You will use 23.70% GPU memory (1456.00 MB) to load weights, and use 76.30% GPU memory (4687.00 MB) to do matrix computation.
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
[GPU Setting] You will use 83.33% GPU memory (5119.00 MB) to load weights, and use 16.67% GPU memory (1024.00 MB) to do matrix computation.
Environment vars changed: {'stream': False, 'inference_memory': 4687.0, 'pin_shared_memory': False}
[GPU Setting] You will use 23.70% GPU memory (1456.00 MB) to load weights, and use 76.30% GPU memory (4687.00 MB) to do matrix computation.
[10:01:52] Load appearance_feature_extractor from C:\ia(forge)\webui\models\liveportrait\base_models\appearance_feature_extractor.safetensors done. live_portrait_wrapper.py:46
Load motion_extractor from C:\ia(forge)\webui\models\liveportrait\base_models\motion_extractor.safetensors done. live_portrait_wrapper.py:49
[10:01:53] Load warping_module from C:\ia(forge)\webui\models\liveportrait\base_models\warping_module.safetensors done. live_portrait_wrapper.py:52
[10:01:54] Load spade_generator from C:\ia(forge)\webui\models\liveportrait\base_models\spade_generator.safetensors done. live_portrait_wrapper.py:55
Load stitching_retargeting_module from C:\ia(forge)\webui\models\liveportrait\retargeting_models\stitching_retargeting_module.safetensors done. live_portrait_wrapper.py:59
Using InsightFace cropper live_portrait_pipeline.py:47
[10:01:58] FaceAnalysisDIY warmup time: 2.770s face_analysis_diy.py:79
[10:02:00] LandmarkRunner warmup time: 1.117s human_landmark_runner.py:95
Load source image from C:\Users\Usuario\AppData\Local\Temp\gradio\tmpzmbcg7mo.png. gradio_pipeline.py:421
[10:02:04] Calculating eyes-open and lip-open ratios successfully! gradio_pipeline.py:432
Traceback (most recent call last):
File "C:\ia(forge)\system\python\lib\site-packages\gradio\queueing.py", line 536, in process_events
response = await route_utils.call_process_api(
File "C:\ia(forge)\system\python\lib\site-packages\gradio\route_utils.py", line 285, in call_process_api
output = await app.get_blocks().process_api(
File "C:\ia(forge)\system\python\lib\site-packages\gradio\blocks.py", line 1923, in process_api
result = await self.call_function(
File "C:\ia(forge)\system\python\lib\site-packages\gradio\blocks.py", line 1508, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "C:\ia(forge)\system\python\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\ia(forge)\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\ia(forge)\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\ia(forge)\system\python\lib\site-packages\gradio\utils.py", line 818, in wrapper
response = f(*args, **kwargs)
File "C:\ia(forge)\webui\extensions\sd-webui-live-portrait\scripts\main.py", line 183, in gpu_wrapped_execute_image_retargeting
out, out_to_ori_blend = pipeline.execute_image_retargeting(*args, **kwargs)
File "C:\ia(forge)\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\ia(forge)\webui\extensions\sd-webui-live-portrait\liveportrait\gradio_pipeline.py", line 310, in execute_image_retargeting
lip_variation_three = torch.tensor(lip_variation_three).to(device)
RuntimeError: Could not infer dtype of NoneType
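For reference, `torch.tensor(None)` raises exactly this error, so the retargeting callback is presumably receiving `None` for `lip_variation_three` (e.g. from an untouched or reset Gradio slider). A minimal sketch of a defensive guard; the helper name `to_scalar_tensor` and the neutral default of `0.0` are assumptions for illustration, not the extension's actual fix:

```python
import torch

def to_scalar_tensor(value, device="cpu", default=0.0):
    """Convert a slider value to a float tensor, tolerating None.

    torch.tensor(None) raises "Could not infer dtype of NoneType",
    so substitute an assumed neutral default (0.0) when no value
    was delivered by the UI.
    """
    if value is None:
        value = default  # hypothetical fallback; pick whatever is neutral for the offset
    return torch.tensor(value, dtype=torch.float32).to(device)

# Reproduces the reported error:
try:
    torch.tensor(None)
except RuntimeError as e:
    print(e)  # Could not infer dtype of NoneType

lip_variation_three = to_scalar_tensor(None)  # falls back to 0.0 instead of crashing
```

Whether `0.0` is the right fallback (or the slider wiring should be fixed upstream) would be a question for the extension's maintainers.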
Steps to reproduce the problem
.
What should have happened?
.
What browsers do you use to access the UI ?
No response
Sysinfo
.
Console logs
.
Additional information
No response