
support cuda malloc #16549 (Open)

wkpark wants to merge 3 commits into dev

Conversation

wkpark (Contributor) commented Oct 12, 2024

imported from comfy:
https://github.com/comfyanonymous/ComfyUI/blob/f1d6cef71c70719cc3ed45a2455a4e5ac910cd5e/cuda_malloc.py

original commits: (listed in full in the comment below)

Description

Detect supported GPUs and set PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync automatically (a sketch of the detection flow follows the list below).

  • support cuda_malloc
  • add --cuda-malloc and --disable-cuda-malloc command-line args
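
A rough sketch of how this kind of auto-detection typically works, simplified and paraphrased from ComfyUI's cuda_malloc.py (the function names and blacklist entries here are illustrative, not the PR's exact code):

```python
import os

# Cards known to misbehave with cudaMallocAsync (Maxwell and older,
# plus assorted Quadro/Tesla models) are blacklisted by name substring.
# Illustrative entries only; the real list is much longer.
BLACKLIST_FRAGMENTS = ("GTX 750", "GTX 9", "GeForce MX110",
                       "GeForce MX130", "Quadro", "Tesla")

def cuda_malloc_supported(gpu_name: str) -> bool:
    # A GPU is considered supported unless its name matches the blacklist.
    return not any(frag in gpu_name for frag in BLACKLIST_FRAGMENTS)

def maybe_enable_cuda_malloc(gpu_name: str) -> None:
    # The env var must be set before torch initializes CUDA, so this
    # runs at startup, before `import torch`. A user-provided value
    # is left untouched.
    if cuda_malloc_supported(gpu_name) and "PYTORCH_CUDA_ALLOC_CONF" not in os.environ:
        os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "backend:cudaMallocAsync"
```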


wkpark requested a review from AUTOMATIC1111 as a code owner on October 12, 2024 13:55
FurkanGozukara commented

@wkpark thank you so much!

Can you explain what exactly cuda malloc does? Does it bring a performance or memory optimization?

wkpark (Contributor, Author) commented Oct 12, 2024

> @wkpark thank you so much!
>
> Can you explain what exactly cuda malloc does? Does it bring a performance or memory optimization?

Please refer to the following articles:
https://iamholumeedey007.medium.com/memory-management-using-pytorch-cuda-alloc-conf-dabe7adec130

and
comfyanonymous/ComfyUI@50bf66e
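
In short (my summary, not a quote from the linked posts): the setting switches PyTorch's GPU allocator from its built-in caching allocator to CUDA's stream-ordered asynchronous allocator (cudaMallocAsync), which can reduce allocation overhead and memory fragmentation on supported GPUs. A minimal sketch of enabling it by hand, assuming a CUDA-capable PyTorch build:

```python
import os

# Must be set before torch initializes CUDA, hence before importing torch.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "backend:cudaMallocAsync"

import torch

x = torch.empty(1024, 1024, device="cuda")  # served by cudaMallocAsync
print(torch.cuda.memory_allocated())        # basic allocator stats still work
```

This PR automates exactly that: it sets the variable at startup, but only when the detected GPU is not on the blacklist.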

FurkanGozukara commented

@wkpark thank you so much.

Do you know why ComfyUI disabled it by default? We have all moved to torch 2.4.

wkpark (Contributor, Author) commented Oct 12, 2024

> @wkpark thank you so much.
>
> Do you know why ComfyUI disabled it by default? We have all moved to torch 2.4.

That's not the latest state; the default-off change was already reverted two months ago: https://github.com/comfyanonymous/ComfyUI/commits/master/cuda_malloc.py

imported from comfy:
https://github.com/comfyanonymous/ComfyUI/blob/f1d6cef71c70719cc3ed45a2455a4e5ac910cd5e/cuda_malloc.py

original commits:
 - comfyanonymous/ComfyUI@799c08a: Auto disable cuda malloc on some GPUs on windows.
 - comfyanonymous/ComfyUI@d39c58b: Disable cuda malloc on GTX 750 Ti.
 - comfyanonymous/ComfyUI@85a8900: Disable cuda malloc on regular GTX 960.
 - comfyanonymous/ComfyUI@30de083: Disable cuda malloc on all the 9xx series.
 - comfyanonymous/ComfyUI@7c0a5a3: Disable cuda malloc on a bunch of quadro cards.
 - comfyanonymous/ComfyUI@5a90d3c: GeForce MX110 + MX130 are maxwell.
 - comfyanonymous/ComfyUI@fc71cf6: Add some 800M gpus to cuda malloc blacklist.
 - comfyanonymous/ComfyUI@861fd58: Add a warning if a card that doesn't support cuda malloc has it enabled.
 - comfyanonymous/ComfyUI@192ca06: Add some more cards to the cuda malloc blacklist.
 - comfyanonymous/ComfyUI@caddef8: Auto disable cuda malloc on unsupported GPUs on Linux.
 - comfyanonymous/ComfyUI@2f93b91: Add Tesla GPUs to cuda malloc blacklist.