Replies: 11 comments 21 replies
-
SDXL has now been made officially public as well; it is on the stabilityai profile on Hugging Face.
-
Currently, it is WORKING in SD.Next (Vlad).
-
From the update message in the command prompt on my last start of A1111, I figure SDXL support is somehow implemented now? How do I run it? I've downloaded the model; can it just be run like any other model?
-
Can someone, for the love of god, explain this? I figure from the related PR that you have to use […], but I have tried putting the base safetensors file in the regular models/Stable-diffusion folder and it does not successfully load (it claims to on the command line, but it is still the old model in VRAM afterwards).
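(For anyone stuck at the same step, a minimal sketch of where the checkpoint is expected to live — this assumes a default A1111 install layout and the stock SDXL 1.0 base filename from the stabilityai Hugging Face page; adjust paths to your setup:)

```
stable-diffusion-webui\
└── models\
    └── Stable-diffusion\
        └── sd_xl_base_1.0.safetensors
```

After dropping the file there, it should appear in the checkpoint dropdown once you restart the webui or hit the refresh button next to the dropdown.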
-
Well, it worked for me by simply loading an SDXL model. The GUI had a pretty hard time and my RAM ran out a few times; I had to reboot, and loading the XL model took about 85% of my 32 gigs of RAM. Once it was loaded, it runs fine. My CLI: set COMMANDLINE_ARGS=--no-half-vae --xformers --disable-safe-unpickle. My rig: 32GB DDR5 and an RTX 4090 with 24GB of VRAM.
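(For reference, a minimal webui-user.bat sketch with those same flags — the flags are the ones quoted above, and the file layout matches a stock Windows install:)

```bat
@echo off
rem webui-user.bat -- launch flags as used in the comment above:
rem   --no-half-vae           keep the VAE in fp32 (avoids black/NaN images)
rem   --xformers              memory-efficient attention
rem   --disable-safe-unpickle skip the pickle safety scan when loading models
set COMMANDLINE_ARGS=--no-half-vae --xformers --disable-safe-unpickle

call webui.bat
```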
-
Definitely RAM hungry. I had to temporarily downgrade from 64GB of RAM to 16GB, and loading takes way too long since it spills over quite a lot into the pagefile: about 15 min to load, if not longer, with heavy system load. Even PR #11958 doesn't help. The model itself works fine once loaded; I haven't tried the refiner due to the same RAM issue. There might also be an issue with […]
-
Which model should I use: the base or the refiner version?
-
I am getting horrible performance on a 10GB 3080: 4 minutes for a 512x512. I tried with […]; I added --skip-torch-cuda-test; I did a fresh install of the webui with no extensions. It says it's using 14GB/10GB of my VRAM.
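(Not a guaranteed fix, but the usual sketch for sub-12GB cards is to stop the whole model from sitting in VRAM at once — --medvram and --xformers are standard A1111 flags, though how much they save on SDXL specifically will vary:)

```bat
rem Sketch for ~10GB cards: --medvram offloads model components to system RAM
rem between uses instead of letting them spill into shared GPU memory
set COMMANDLINE_ARGS=--medvram --xformers --no-half-vae
```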
-
I just tried SDXL 1.0 with a 3060 Ti (8GB VRAM). It took around 20-40s for 1024x1024.
-
I thought --no-half-vae forced you to use the full VAE and thus way more VRAM. I ran several tests generating a 1024x1024 image using a 1.5 model and SDXL for each argument. Using my normal arguments: 1.5 = 25s; […]: 1.5 = 1:30; […]: 1.5 = 45s. (RTX 3080 FE 10GB)
-
Base works fine on a 4090 (~10 sec inference) with 64GB RAM (~23GB in use with no extensions/pipelines and a browser open). Is there a way to enable the refiner yet (without an extension)?
-
Hi,
When will we be able to use SDXL in the webui, please? I have researcher access for 0.9.
Thanks