Replies: 3 comments 1 reply
-
I changed this library and do not see any improvement. Seems like a dead end right now :(
-
I extracted the .so files from this deb and copied them over PyTorch's bundled cuDNN libraries:

```bash
PYTORCH_PATH=$(python -c "import torch; print(torch.__path__[0])")
cp /data/cudnn/lib* "$PYTORCH_PATH"/lib/
```

On an RTX 4090, this went from 5 it/s to 30+ it/s. I think this is worth doing.
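In case it helps anyone reproduce this, below is a minimal sketch of how the .so files can be pulled out of the downloaded cuDNN .deb without installing it system-wide. The package file name is a placeholder for whichever .deb you grabbed from NVIDIA, and /data/cudnn is just the staging directory used above.

```bash
# Unpack the cuDNN runtime .deb into a scratch directory (no system-wide install).
# The file name below is a placeholder; substitute the .deb you actually downloaded.
mkdir -p /data/cudnn/extracted
dpkg -x libcudnn8_*.deb /data/cudnn/extracted

# Debian/Ubuntu packages typically place the shared libraries under usr/lib/x86_64-linux-gnu.
cp /data/cudnn/extracted/usr/lib/x86_64-linux-gnu/libcudnn* /data/cudnn/
```

This leaves any system-wide CUDA/cuDNN installation untouched; only the copies inside the PyTorch package directory get replaced.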
-
I have just replaced the cuDNN files as well; on a 4090 this doubled the speed from 10 it/s to 20 it/s. I'm using an old CPU (Intel 6800K), so I didn't expect the 30+, but I'm still happy about the gain.
-
A Reddit user reported a speed improvement when switching from the libcudnn.so bundled with the Python package to the actual file from the cuDNN package for Linux: https://www.reddit.com/r/StableDiffusion/comments/10fw843/397_its_with_a_4090_on_linux/
Seems like it is worth testing. Unfortunately I don't have much knowledge of Python/CUDA internals, so maybe someone can manage to test it quickly? If nobody checks this out, I'll try it next week and report back here with what I find.
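Not a full benchmark, but a quick way to see which cuDNN build a PyTorch install is actually using before and after swapping the files (a sketch; it assumes a CUDA-enabled PyTorch wheel on Linux):

```bash
# cuDNN version PyTorch reports at runtime (e.g. 8600 for 8.6.0)
python -c "import torch; print(torch.backends.cudnn.version())"

# The cuDNN libraries bundled inside the installed wheel
PYTORCH_PATH=$(python -c "import torch; print(torch.__path__[0])")
ls -l "$PYTORCH_PATH"/lib/libcudnn*
```

If the reported version changes after copying the new libraries in, the replacement took effect.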