Could we update to the latest Torch & Xformers? #16394
-
https://pypi.org/project/xformers/ https://pytorch.org/get-started/locally/ Do those versions bring a speed increase?
Replies: 6 comments 12 replies
-
Definitely faster: generation went from 15 seconds to 13 seconds. However, ADetailer's face detection seems broken as a result; it finds literally 100 faces after the change, though the mesh model still works. I went back to 2.3.1 for now.
-
I followed the advice from AIEXAAA and DvST8x and indeed, generation is faster now! My SD1.5 test case went from 21 s to 19 s. Another major effect that was not mentioned: model loading is 4x faster now!
-
Yeah, if you use ADetailer, maybe hold off for a moment: Bing-su/adetailer#687
-
On a 4060 Ti, the speed went from 1.6 to 2 it/s. A 20% gain!
-
Interesting. Today I tested torch 2.4.0 + CUDA 12.4 against torch 2.1.2 + CUDA 12.1 (RTX 4080, sdp-no-mem only, A1111 1.10.1), and the old version is much faster for me. SDXL 832x1216: 6-7+ s vs. 5 s (cold and warm runs). SDXL 832x1216 with 2x hires fix: 1:30+ min vs. under 1 min (and CUDA 12.4 sometimes crashes with an out-of-memory error). I think torch 2.3.1 + CUDA 12.1 is also slower for me, but I'm not sure.
You can upgrade, but be careful that your Xformers build matches the CUDA version of your PyTorch install.
I have installed PyTorch 2.4.0 with CUDA 12.4. On an RTX 4080, SD1.5 is about 15% to 20% faster, and SDXL is about 10% faster.
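The version-matching advice above can be sketched as a small check. The pairings in the table below are illustrative assumptions, not an authoritative compatibility list; always confirm the torch version an xformers release actually pins on its PyPI page or release notes:

```python
# Hypothetical helper: check whether the installed torch version matches the
# torch version an xformers release was (assumed to be) built against.
# ASSUMPTION: this PAIRS table is for illustration only; verify real pins
# against the xformers release notes / PyPI metadata.
PAIRS = {
    "0.0.27.post2": "2.4.0",
    "0.0.26.post1": "2.3.0",
}

def compatible(xformers_version: str, torch_version: str) -> bool:
    """True if the assumed table pins torch_version for xformers_version."""
    expected = PAIRS.get(xformers_version)
    # Strip local version tags like "2.4.0+cu124" before comparing.
    return expected is not None and torch_version.split("+")[0] == expected

print(compatible("0.0.27.post2", "2.4.0+cu124"))  # -> True under the assumed table
print(compatible("0.0.27.post2", "2.3.1"))        # -> False: mismatched pair
```

In practice you would compare `torch.__version__` and the installed xformers version; a mismatch here is what typically produces the "xformers was built for a different torch" warning at webui startup.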