
Could not allocate tensor with XXXXXXXX bytes. There is not enough GPU video memory available! #6168

Open
A-1223-I opened this issue Dec 23, 2024 · 12 comments
Labels
User Support A user needs help with something, probably not a bug.

Comments

@A-1223-I

Your question

Hi, the Anaconda console tells me this:
Using directml with device:
Total VRAM 1024 MB, total RAM 32694 MB
pytorch version: 2.4.1+cpu
Set vram state to: NORMAL_VRAM

How can I set the VRAM state to high? My GPU has 16 GB of VRAM, but only 1024 MB of shared memory is being used. In Task Manager it shows 16 GB of dedicated VRAM in use but only 1024 MB of shared memory, and it sometimes throws that error. How can I fix it? (I'm not using the WebUI.)

Logs

## ComfyUI-Manager: installing dependencies done.
[2024-12-22 18:10:31.968] ** ComfyUI startup time: 2024-12-22 18:10:31.968410
[2024-12-22 18:10:31.982] ** Platform: Windows
[2024-12-22 18:10:31.982] ** Python version: 3.10.12 | packaged by Anaconda, Inc. | (main, Jul  5 2023, 19:01:18) [MSC v.1916 64 bit (AMD64)]
[2024-12-22 18:10:31.983] ** Python executable: C:\Users\Administrator\anaconda3\envs\comfyui\python.exe
[2024-12-22 18:10:31.983] ** ComfyUI Path: C:\ComfyUI\ComfyUI
[2024-12-22 18:10:31.983] ** Log path: C:\ComfyUI\ComfyUI\comfyui.log

Prestartup times for custom nodes:
[2024-12-22 18:10:32.847]    1.9 seconds: C:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Manager
[2024-12-22 18:10:32.847] 
[2024-12-22 18:10:34.493] Using directml with device: 
[2024-12-22 18:10:34.496] Total VRAM 1024 MB, total RAM 32694 MB
[2024-12-22 18:10:34.496] pytorch version: 2.4.1+cpu
[2024-12-22 18:10:34.497] Set vram state to: HIGH_VRAM
[2024-12-22 18:10:34.497] Device: privateuseone
[2024-12-22 18:10:35.433] Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention
[2024-12-22 18:10:37.439] [Prompt Server] web root: C:\ComfyUI\ComfyUI\web
[2024-12-22 18:10:38.124] ### Loading: ComfyUI-Manager (V2.55.5)
[2024-12-22 18:10:38.291] ### ComfyUI Version: v0.3.9-10-g57f330c | Released on '2024-12-22'
[2024-12-22 18:10:38.295] 
Import times for custom nodes:
[2024-12-22 18:10:38.295]    0.0 seconds: C:\ComfyUI\ComfyUI\custom_nodes\websocket_image_save.py
[2024-12-22 18:10:38.295]    0.2 seconds: C:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Manager
[2024-12-22 18:10:38.296] 
[2024-12-22 18:10:38.306] Starting server

[2024-12-22 18:10:38.306] To see the GUI go to: http://127.0.0.1:8188
[2024-12-22 18:10:38.477] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[2024-12-22 18:10:38.489] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[2024-12-22 18:10:38.536] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[2024-12-22 18:10:38.599] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[2024-12-22 18:10:38.627] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[2024-12-22 18:11:04.788] FETCH DATA from: C:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
[2024-12-22 18:11:11.279] got prompt
[2024-12-22 18:11:14.785] model weight dtype torch.float32, manual cast: None
[2024-12-22 18:11:14.787] model_type EPS
[2024-12-22 18:11:16.705] Using split attention in VAE
[2024-12-22 18:11:16.706] Using split attention in VAE
[2024-12-22 18:11:17.174] Requested to load SDXLClipModel
[2024-12-22 18:11:17.188] loaded completely 9.5367431640625e+25 1560.802734375 True
[2024-12-22 18:11:17.962] loaded straight to GPU
[2024-12-22 18:11:17.962] Requested to load SDXL
[2024-12-22 18:11:17.963] 0 models unloaded.
[2024-12-22 18:11:18.015] loaded completely 9.5367431640625e+25 9794.096694946289 True
[2024-12-22 18:11:18.688] Token indices sequence length is longer than the specified maximum sequence length for this model (211 > 77). Running this sequence through the model will result in indexing errors
[2024-12-22 18:11:18.694] Token indices sequence length is longer than the specified maximum sequence length for this model (211 > 77). Running this sequence through the model will result in indexing errors
[2024-12-22 18:11:20.233] C:\ComfyUI\ComfyUI\comfy\model_sampling.py:134: UserWarning: The operator 'aten::frac.out' is not currently supported on the DML backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at C:\__w\1\s\pytorch-directml-plugin\torch_directml\csrc\dml\dml_cpu_fallback.cpp:17.)
  w = t.frac()
[2024-12-22 18:11:20.257] 0 models unloaded.
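(For reference, the `aten::frac` operator that the log says falls back to the CPU just computes the element-wise fractional part; a minimal Python sketch of the same operation:)

```python
import math

def frac(x):
    # what aten::frac / torch.frac computes: the fractional part,
    # keeping the sign of the input, i.e. frac(x) = x - trunc(x)
    return x - math.trunc(x)

print(frac(2.75), frac(-2.75))  # 0.75 -0.75
```

The fallback only costs performance, not correctness, so this warning is unrelated to the allocation failure.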

Other

I have an RX 7800 XT, 32 GB of RAM, and a Ryzen 7 5700X. I want to use ComfyUI to run some image models and then use Hunyuan to create videos. Thanks for your time.

@A-1223-I A-1223-I added the User Support A user needs help with something, probably not a bug. label Dec 23, 2024
@A-1223-I
Author

another log

2024-12-22T17:48:41.295106 - [START] Security scan
2024-12-22T17:48:42.156829 - [DONE] Security scan
2024-12-22T17:48:42.160830 - ## ComfyUI-Manager: installing dependencies. (GitPython)
2024-12-22T17:48:51.770192 - ## ComfyUI-Manager: installing dependencies done.
2024-12-22T17:48:51.770192 - ** ComfyUI startup time: 2024-12-22 17:48:51.770192
2024-12-22T17:48:51.786904 - ** Platform: Windows
2024-12-22T17:48:51.786904 - ** Python version: 3.10.12 | packaged by Anaconda, Inc. | (main, Jul 5 2023, 19:01:18) [MSC v.1916 64 bit (AMD64)]
2024-12-22T17:48:51.787903 - ** Python executable: C:\Users\Administrator\anaconda3\envs\comfyui\python.exe
2024-12-22T17:48:51.787903 - ** ComfyUI Path: C:\ComfyUI\ComfyUI
2024-12-22T17:48:51.787903 - ** Log path: C:\ComfyUI\ComfyUI\comfyui.log
2024-12-22T17:48:59.921672 -
Prestartup times for custom nodes:
2024-12-22T17:48:59.921672 - 18.6 seconds: C:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Manager
2024-12-22T17:48:59.922673 -
2024-12-22T17:49:01.693755 - Using directml with device:
2024-12-22T17:49:01.696815 - Total VRAM 1024 MB, total RAM 32694 MB
2024-12-22T17:49:01.696815 - pytorch version: 2.4.1+cpu
2024-12-22T17:49:01.697753 - Set vram state to: NORMAL_VRAM
2024-12-22T17:49:01.697753 - Device: privateuseone
2024-12-22T17:49:02.671844 - Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention
2024-12-22T17:49:04.743914 - [Prompt Server] web root: C:\ComfyUI\ComfyUI\web
2024-12-22T17:49:05.589529 - ### Loading: ComfyUI-Manager (V2.55.5)
2024-12-22T17:49:05.773654 - ### ComfyUI Version: v0.3.9-10-g57f330c | Released on '2024-12-22'
2024-12-22T17:49:05.992684 -
Import times for custom nodes:
2024-12-22T17:49:05.993683 - 0.0 seconds: C:\ComfyUI\ComfyUI\custom_nodes\websocket_image_save.py
2024-12-22T17:49:05.993683 - 0.6 seconds: C:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Manager
2024-12-22T17:49:05.993683 -
2024-12-22T17:49:05.999686 - Starting server

2024-12-22T17:49:06.001768 - To see the GUI go to: http://127.0.0.1:8188
2024-12-22T17:49:06.186951 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2024-12-22T17:49:06.253984 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2024-12-22T17:49:06.265993 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2024-12-22T17:49:06.317663 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2024-12-22T17:49:06.354730 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2024-12-22T17:50:48.709080 - FETCH DATA from: C:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
2024-12-22T17:51:32.618788 - got prompt
2024-12-22T17:51:33.084317 - model weight dtype torch.float32, manual cast: None
2024-12-22T17:51:33.085319 - model_type EPS
2024-12-22T17:51:35.092883 - Using split attention in VAE
2024-12-22T17:51:35.095485 - Using split attention in VAE
2024-12-22T17:51:35.498831 - Requested to load SDXLClipModel
2024-12-22T17:51:35.529900 - loaded completely 9.5367431640625e+25 1560.802734375 True
2024-12-22T17:51:37.403498 - Token indices sequence length is longer than the specified maximum sequence length for this model (211 > 77). Running this sequence through the model will result in indexing errors
2024-12-22T17:51:37.413059 - Token indices sequence length is longer than the specified maximum sequence length for this model (211 > 77). Running this sequence through the model will result in indexing errors
2024-12-22T17:51:39.128897 - Requested to load SDXL
2024-12-22T17:51:39.128897 - 0 models unloaded.
2024-12-22T17:51:44.152140 - loaded completely 9.5367431640625e+25 9794.096694946289 True
2024-12-22T17:51:44.195603 - C:\ComfyUI\ComfyUI\comfy\samplers.py:838: UserWarning: The operator 'aten::count_nonzero.dim_IntList' is not currently supported on the DML backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at C:\__w\1\s\pytorch-directml-plugin\torch_directml\csrc\dml\dml_cpu_fallback.cpp:17.)
  if latent_image is not None and torch.count_nonzero(latent_image) > 0: #Don't shift the empty latent image.
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [05:29<00:00, 15.79s/it]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [05:29<00:00, 16.46s/it]
2024-12-22T17:57:13.653466 - Requested to load AutoencoderKL
2024-12-22T17:57:18.255918 - 0 models unloaded.
2024-12-22T17:57:18.338614 - loaded completely 9.5367431640625e+25 319.11416244506836 True
2024-12-22T17:57:20.453124 - !!! Exception during processing !!! Could not allocate tensor with 530841600 bytes. There is not enough GPU video memory available!
2024-12-22T17:57:20.459112 - Traceback (most recent call last):
  File "C:\ComfyUI\ComfyUI\execution.py", line 328, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "C:\ComfyUI\ComfyUI\execution.py", line 203, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "C:\ComfyUI\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "C:\ComfyUI\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "C:\ComfyUI\ComfyUI\nodes.py", line 285, in decode
    images = vae.decode(samples["samples"])
  File "C:\ComfyUI\ComfyUI\comfy\sd.py", line 463, in decode
    out = self.process_output(self.first_stage_model.decode(samples).to(self.output_device).float())
  File "C:\ComfyUI\ComfyUI\comfy\ldm\models\autoencoder.py", line 209, in decode
    dec = self.decoder(dec, **decoder_kwargs)
  File "C:\Users\Administrator\anaconda3\envs\comfyui\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Administrator\anaconda3\envs\comfyui\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\ComfyUI\ComfyUI\comfy\ldm\modules\diffusionmodules\model.py", line 719, in forward
    h = self.up[i_level].block[i_block](h, temb, **kwargs)
  File "C:\Users\Administrator\anaconda3\envs\comfyui\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Administrator\anaconda3\envs\comfyui\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\ComfyUI\ComfyUI\comfy\ldm\modules\diffusionmodules\model.py", line 195, in forward
    h = self.norm2(h)
  File "C:\Users\Administrator\anaconda3\envs\comfyui\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Administrator\anaconda3\envs\comfyui\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\ComfyUI\ComfyUI\comfy\ops.py", line 126, in forward
    return super().forward(*args, **kwargs)
  File "C:\Users\Administrator\anaconda3\envs\comfyui\lib\site-packages\torch\nn\modules\normalization.py", line 288, in forward
    return F.group_norm(
  File "C:\Users\Administrator\anaconda3\envs\comfyui\lib\site-packages\torch\nn\functional.py", line 2606, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Could not allocate tensor with 530841600 bytes. There is not enough GPU video memory available!

2024-12-22T17:57:20.461112 - Prompt executed in 347.84 seconds
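For a sense of scale: the failed 530,841,600-byte allocation is exactly one fp32 tensor of about 132.7 million elements. A quick sanity check with a purely illustrative shape (not necessarily the one ComfyUI built, but the arithmetic is the same):

```python
def tensor_bytes(shape, bytes_per_element=4):
    # fp32 = 4 bytes per element; total bytes = product of dims * element size
    n = 1
    for dim in shape:
        n *= dim
    return n * bytes_per_element

# hypothetical decoder activation: batch 1, 512 channels, 540x480 spatial
print(tensor_bytes((1, 512, 540, 480)))  # 530841600
```

This is why the VAE decode step is the one that dies: decoder activations near full image resolution dwarf the latents. Tiled VAE decoding (the "VAE Decode (Tiled)" node) or a lower output resolution shrinks exactly this tensor.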

@Archkon

Archkon commented Dec 23, 2024

You may need to manually type this command
python main.py --highvram
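For reference, ComfyUI picks its VRAM policy from mutually exclusive launch flags, so the DirectML flag and a VRAM mode go on the same command line (a sketch; the exact set of flags depends on your ComfyUI version):

```shell
# DirectML backend with the high-VRAM policy
python main.py --directml --highvram

# if allocations still fail (here the crash happened in VAE decode),
# going the other direction is often more useful on DirectML:
python main.py --directml --lowvram
```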

@A-1223-I
Author

You may need to manually type this command python main.py --highvram

Without `--directml`? I already tried running `python main.py --directml --highvram` but it didn't work.

@Archkon

Archkon commented Dec 24, 2024

Sorry, I didn't check the log you provided. Maybe you need to try ZLUDA.

@A-1223-I
Author

I'll try it and let you know. Thanks, by the way.

@A-1223-I
Author

Hi bro, I installed everything to work with ZLUDA, and for images it works just fine, but with Hunyuan for video it now gives me this error: "CUDA out of memory. Tried to allocate 10.10 GiB. GPU"

LOGS:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 10.10 GiB. GPU

Got an OOM, unloading all loaded models.
Prompt executed in 188.48 seconds
fatal: No names found, cannot describe anything.
Failed to get ComfyUI version: Command '['git', 'describe', '--tags']' returned non-zero exit status 128.

Otherwise, resource usage across my PC's components looks fine.

@Archkon

Archkon commented Dec 25, 2024

That's a hardware limitation: you don't have enough VRAM. You could try the quantized Hunyuan video model, and if it still reports "CUDA out of memory," there is no solution other than upgrading to a better GPU.
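Rough arithmetic behind the quantization advice, assuming a model of about 13B parameters (roughly HunyuanVideo's published size; the exact count here is an assumption):

```python
def model_weight_gib(n_params, bits_per_param):
    # raw weight footprint only; activations, the text encoder,
    # and the VAE all add to this at runtime
    return n_params * bits_per_param / 8 / 2**30

print(round(model_weight_gib(13e9, 16), 1))  # 24.2  (fp16 weights)
print(round(model_weight_gib(13e9, 8), 1))   # 12.1  (fp8 / 8-bit quantized)
```

So an fp8 or otherwise 8-bit-quantized checkpoint roughly halves the weight footprint, which is what can make such a model fit on a 16 GB card at all.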

@A-1223-I
Author

I'm going to look for that quantized Hunyuan model. I also managed to solve that problem, but now I have a new one: RuntimeError: Storage size calculation overflowed with sizes=[1, -2130181503]. Do you know what it could be? My SSD has 87 GB free.
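(The negative size in `sizes=[1, -2130181503]` is the signature of a 32-bit integer overflow: an element count just above 2^31 wraps to a negative number when squeezed into a signed 32-bit field. A minimal demonstration:)

```python
def as_int32(n):
    # reinterpret an arbitrary integer as a signed 32-bit value (two's complement)
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

# the element count implied by the error message, just over INT32_MAX (2147483647):
true_count = 2164785793
print(as_int32(true_count))  # -2130181503
```

In practice it means some dimension product (resolution × frame count × channels) was far too large for a 32-bit size computation, so it has nothing to do with free SSD space; reducing resolution or frame count avoids the overflow.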

@Archkon

Archkon commented Dec 25, 2024

Could you provide more detail? What model and workflow are you using?
It has nothing to do with your SSD. That's an issue with PyTorch, I guess.

@A-1223-I
Author

I have the fp16 and fp8 versions (or something like that) of the VAE and the diffusion model. I had some values set too high, which is why it gave the previous error, and when I lowered them this new one appeared. The workflow is the same t2v one from the examples.

@A-1223-I
Author

Maybe if I update PyTorch it will work?

@A-1223-I
Author

Hi, I managed to make it work by updating PyTorch, installing Triton, and so on. But now when I generate my video I end up with a black output instead of the video. I tried installing SageAttention, but for some reason it didn't recognize ComfyUI (I think maybe I installed it wrong). What solutions are there for this black-video issue?

LOGS:

loaded completely 13901.6892578125 13901.55859375 False
Input (height, width, video_length) = (512, 320, 29)
Sampling 29 frames in 8 latents at 320x512 with 20 inference steps
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [31:37<00:00, 94.87s/it]
Allocated memory: memory=24.366 GB
Max allocated memory: max_memory=27.514 GB
Max reserved memory: max_reserved=29.424 GB
Decoding rows: 100%|███████████████████████████████████████████████████████████████████| 22/22 [00:19<00:00, 1.12it/s]
Blending tiles: 100%|██████████████████████████████████████████████████████████████████| 22/22 [00:00<00:00, 50.40it/s]
C:\ComfyUI\Zluda\ComfyUI-Zluda\custom_nodes\ComfyUI-VideoHelperSuite\videohelpersuite\nodes.py:96: RuntimeWarning: invalid value encountered in cast
return tensor_to_int(tensor, 8).astype(np.uint8)
Prompt executed in 2152.29 seconds
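The `invalid value encountered in cast` warning from VideoHelperSuite is the classic cause of an all-black output: the decoded frames contain NaNs, and casting NaN to uint8 is undefined. A small numpy sketch of the failure and a defensive fix (the fix is illustrative, not VideoHelperSuite's actual code):

```python
import warnings
import numpy as np

frame = np.array([[0.5, np.nan]], dtype=np.float32)  # a decoded pixel row with a NaN

# naive cast: this is what triggers "RuntimeWarning: invalid value encountered in cast"
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    bad = (frame * 255).astype(np.uint8)  # NaN becomes an undefined uint8 value

# defensive cast: replace NaNs and clip to [0, 255] before converting
safe = np.clip(np.nan_to_num(frame, nan=0.0) * 255, 0, 255).astype(np.uint8)
print(safe)  # [[127   0]]
```

The NaNs themselves usually come from the VAE overflowing in reduced precision, so the common ComfyUI-side remedy is forcing the VAE to fp32 (the `--fp32-vae` launch flag, if your build supports it) rather than patching the cast.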
