VRAM Estimator Extension #8330
space-nuko started this conversation in Show and tell
I made an extension that estimates VRAM usage for `txt2img` and `img2img`, supporting Hires Fix as well: https://github.com/space-nuko/a1111-stable-diffusion-webui-vram-estimator

Related issue: #1730

I don't know if it will work flawlessly on other people's systems yet, but I get pretty accurate results (within 1 GB) with my 3090. It takes into account that using non-latent upscalers with Hires Fix will use more VRAM than latent upscalers, and that `img2img` consumes more VRAM than `txt2img`.

How it works: you run a benchmark across a set of image sizes and batch sizes on `txt2img`/`img2img`, and the extension extrapolates those results to whatever config options you set.

An interesting finding to come out of this: batch size does not seem to correlate with VRAM usage in an obvious way for `txt2img` and latent Hires Fix, although the growth is consistent within the same batch size. It does appear to correlate for `img2img` and non-latent Hires Fix.
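To make the benchmark-then-extrapolate approach concrete, here is a minimal sketch, not the extension's actual implementation: the `fit_vram_curve`/`estimate_vram` helpers and the quadratic fit are illustrative assumptions. The idea is to fit a simple curve to the measured (resolution × batch, peak VRAM) points for each mode and evaluate it at the user's current settings.

```python
# A minimal sketch of the benchmark-then-extrapolate idea, NOT the
# extension's actual code. fit_vram_curve/estimate_vram are hypothetical
# helpers, and the quadratic fit is an assumption about the curve shape.
import numpy as np

def fit_vram_curve(samples):
    """samples: list of (width, height, batch_size, peak_vram_bytes)
    measured by the benchmark for one mode (txt2img or img2img)."""
    x = np.array([w * h * b for w, h, b, _ in samples], dtype=float)
    y = np.array([v for _, _, _, v in samples], dtype=float)
    return np.polyfit(x, y, deg=2)

def estimate_vram(coeffs, width, height, batch_size):
    """Evaluate the fitted curve at the user's current settings."""
    return float(np.polyval(coeffs, width * height * batch_size))

# Keeping one curve per mode reflects the finding above: batch size
# correlates with VRAM for img2img and non-latent Hires Fix, so it can go
# into the fit input there, while txt2img may need a per-batch-size fit.
```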
Reply:

I ran the benchmark with Max Batch Count set to 8 on a 3090 Ti and it just kept going for 10 minutes, and I was getting messages that CUDA is out of memory. Should it go for that long? With Max Batch Count set to 4 it was fine and took around 5 minutes.
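The behavior described here may indicate the benchmark retrying sizes that no longer fit. Purely as an illustration (the `run_generation` callable and `benchmark_batch_sizes` helper are hypothetical, not the extension's code), a benchmark loop can treat CUDA OOM as a stop signal and record peak VRAM only for the batch sizes that fit:

```python
# A minimal sketch (not the extension's code) of an OOM-tolerant benchmark
# loop: record peak VRAM per batch size and stop at the first size that
# runs out of memory. run_generation is a hypothetical stand-in for the
# measured txt2img/img2img call.
import torch

def benchmark_batch_sizes(run_generation, max_batch_count):
    peaks = {}
    for batch_size in range(1, max_batch_count + 1):
        torch.cuda.empty_cache()
        torch.cuda.reset_peak_memory_stats()
        try:
            run_generation(batch_size=batch_size)
        except RuntimeError as e:
            if "out of memory" in str(e).lower():
                torch.cuda.empty_cache()
                break  # larger batches won't fit either; stop here
            raise
        peaks[batch_size] = torch.cuda.max_memory_allocated()
    return peaks  # peak bytes for each batch size that fit
```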