Could not allocate tensor with XXXXXXXX bytes. There is not enough GPU video memory available! #6168
Comments
Another log:

```
2024-12-22T17:48:41.295106 - [START] Security scan
2024-12-22T17:49:06.001768 - To see the GUI go to: http://127.0.0.1:8188
2024-12-22T17:57:20.461112 - Prompt executed in 347.84 seconds
```
You may need to type this command manually.
Without the --directml? I already tried running `python main.py --directml --highvram`, but it didn't work.
Sorry, I didn't check the log you provided. Maybe you need to try ZLUDA.
I will try it and let you know. Thanks, by the way.
Hi, I installed everything to work with ZLUDA. With images it works just fine, but with Hunyuan for video it now gives me this error: "CUDA out of memory. Tried to allocate 10.10 GiB."

Logs: `Got an OOM, unloading all loaded models.`

The component usage for my PC otherwise seems fine.
That is a hardware limitation: you don't have enough VRAM. You could try the quantized Hunyuan video model, and if it still reports "CUDA out of memory," there is no solution other than upgrading to a GPU with more VRAM.
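To see why a quantized model can help, here is a back-of-envelope sketch of weight memory by precision. The ~13B parameter count used for the Hunyuan video model is an assumption for illustration; activations and the VAE need additional memory on top of this.

```python
# Rough estimate of GPU memory needed just to hold model weights.
def weight_bytes(n_params: int, bits_per_weight: int) -> int:
    """Bytes required to store n_params weights at the given precision."""
    return n_params * bits_per_weight // 8

params = 13_000_000_000  # assumed parameter count for illustration

fp16_gb = weight_bytes(params, 16) / 1e9  # ~26 GB: exceeds a 16 GB card
fp8_gb = weight_bytes(params, 8) / 1e9    # ~13 GB: borderline on 16 GB
print(round(fp16_gb), round(fp8_gb))      # prints: 26 13
```

Halving the bits per weight halves the weight footprint, which is why an fp8 or GGUF-quantized checkpoint can fit where fp16 cannot.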
I'm going to search for that Hunyuan model. I also managed to solve that problem, but now I have a new one: `RuntimeError: Storage size calculation overflowed with sizes=[1, -2130181503]`. Do you know what it could be? My SSD has 87 GB free.
Could you provide more detail? What model and workflow are you using?
I have the fp16 and fp8 versions (or something like that) of the VAE and the diffusion model. I had some values set too high, which is what caused the previous error, and when I lowered them this new one appeared. The workflow is the same t2v workflow from the examples.
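One plausible reading of that error: the negative size suggests a requested element count larger than 2**31 - 1 wrapping around in a signed 32-bit integer, which is consistent with lowering the resolution/frame values making it go away. The specific count below is hypothetical, chosen only to reproduce the number in the error message.

```python
# Sketch of why an oversized tensor dimension can print as a negative number:
# a count above 2**31 - 1 stored in a signed 32-bit integer wraps around.
def to_int32(n: int) -> int:
    """Interpret the low 32 bits of n as a signed 32-bit integer."""
    n &= 0xFFFFFFFF
    return n - (1 << 32) if n >= (1 << 31) else n

requested = 2_164_785_793      # hypothetical element count > 2**31 - 1
print(to_int32(requested))     # prints: -2130181503, matching the error
```

Reducing the video resolution or frame count shrinks the element count back below the 32-bit limit, which matches what you observed.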
Maybe if I update PyTorch it will work?
Hi, I managed to make it work by updating PyTorch, installing Triton, and a few other things. But now when I generate my video I end up with a black output instead of the video. I tried installing SageAttention, but for some reason ComfyUI didn't recognize it (I think maybe I installed it wrong). What solutions are there for this black-video issue?

Logs: `loaded completely 13901.6892578125 13901.55859375 False`
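A fully black output is often a VAE precision problem (fp16 decoding producing NaN frames) rather than a sampler problem. A sketch of two things to try; the `--fp32-vae` flag exists in recent ComfyUI builds, but confirm against `python main.py --help` on your install:

```shell
# Run the VAE in full fp32 precision to rule out fp16 overflow causing black frames.
python main.py --fp32-vae

# Verify SageAttention is importable from the same Python environment ComfyUI uses;
# if this fails, it was installed into a different environment.
python -c "import sageattention; print('sageattention OK')"
```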
Your question
Hi, the Anaconda console tells me this:

```
Using directml with device:
Total VRAM 1024 MB, total RAM 32694 MB
pytorch version: 2.4.1+cpu
Set vram state to: NORMAL_VRAM
```

How can I set the VRAM state to high? My GPU has 16 GB of memory, and the shared memory in use is 1024 MB; in Task Manager it uses all 16 GB of dedicated VRAM but only 1024 MB shared, and it sometimes throws that error. How can I fix it? (I'm not using the webUI.)
Logs
Other
I have an RX 7800XT, 32 GB of RAM, and a Ryzen 7 5700X. I want to use ComfyUI to run some image models and then use Hunyuan to create videos. Thanks for your time.