SDXL and TensorRT #13474
SuperSecureHuman started this conversation in Optimization
Replies: 1 comment
-
If my understanding is right, the first step would be to remove the abstraction of the SDXL pipeline from HF and implement the loop manually. Then we can start replacing the UNet with a TRT variant to measure the speedups.
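The "unroll the pipeline, then swap the UNet" idea above can be sketched as a plain denoising loop in which the UNet is just a callable, making it the natural substitution point for a TRT engine wrapper. This is a minimal illustrative sketch, not the diffusers API: `denoise`, `stub_unet`, and the toy Euler-style update are all hypothetical names and logic.

```python
# Minimal sketch of a hand-rolled denoising loop. The `unet` argument is
# the swap point: a TensorRT engine wrapper exposing the same call
# signature could be passed in place of the PyTorch module.

def denoise(latents, unet, timesteps, cond):
    """Run the denoising loop; `unet(latents, t, cond)` predicts noise."""
    for t in timesteps:
        noise_pred = unet(latents, t, cond)
        # Toy Euler-style update; a real pipeline would call its scheduler here.
        latents = [x - 0.1 * n for x, n in zip(latents, noise_pred)]
    return latents

# Stub "UNet" standing in for either the PyTorch module or a TRT engine.
def stub_unet(latents, t, cond):
    return [0.0 for _ in latents]  # predicts zero noise, so latents pass through

result = denoise([1.0, 2.0], stub_unet, timesteps=[999, 500, 0], cond=None)
print(result)  # → [1.0, 2.0], unchanged because the stub predicts zero noise
```

Because the loop only depends on the callable's signature, benchmarking the PyTorch UNet against a TRT engine becomes a one-line swap.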
-
I see a lack of a directly usable TRT port of the SDXL model. It would be cool to get working on it, have some discussions, and hopefully produce an optimized TRT port of SDXL for A1111, and even run barebones inference.
I see that some discussion has happened in #10684, but having a dedicated thread for this would be much better.
Currently there is this: https://huggingface.co/stabilityai/stable-diffusion-xl-1.0-tensorrt
But it relies on a Docker container and is not directly usable.
At the end of the day, my goal is to get SDXL running on TRT even on a Colab T4 (while the export can happen on some beefy machine).
Edit - Relevant issue ---> #12007
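The "export on a beefy machine, run on a T4" split could look roughly like the command sketch below, assuming the UNet has already been exported to ONNX. The path `unet.onnx` and the output name are placeholders; `trtexec` ships with TensorRT.

```shell
# On the export machine: build a TensorRT engine from an ONNX export of the UNet.
# unet.onnx is a placeholder; input shapes and precision depend on your setup.
trtexec --onnx=unet.onnx \
        --saveEngine=unet.plan \
        --fp16
# Then copy unet.plan to the inference machine and load it there.
```

One caveat worth flagging for this plan: TensorRT engine plans are tied to the GPU architecture and TensorRT version they were built with, so an engine built on the beefy machine generally has to be built for (or on) hardware compatible with the T4.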