@ShamilS So, one of the largest factors for repeatability is the model you use, even more than your generation seed. (Over enough generations you will see familiar looks from a given model, even with different seeds.) The online version is "using the Stable Diffusion XL model" (from their FAQ). The screenshot of your offline version shows you are using the basic v1.5 model. These are very different "libraries" of trained content, so you won't be able to replicate the SDXL generation with the basic v1.5 model. Some folks have trained the 1.5 model on images from the SDXL model to get a similar look, but if you want to replicate exactly, you're going to need the same model. (Different GPUs will also introduce some variation in seed replication, but that is an old, well-documented issue you can look up.) I still use SD 1.5-based models (though not the basic one) in my offline setup, since my GPU is not a fan of the SDXL line, so you may run into that issue as well. Still, I recommend getting hold of the SDXL model, trying your generation again, and seeing where you are then.
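To illustrate the point above, here is a toy sketch in pure Python (not actual Stable Diffusion code; the "models" are hypothetical stand-ins): the seed pins down the initial random noise exactly, but two different models turn that same noise into different outputs.

```python
import random

def sample_noise(seed, n=4):
    """A fixed seed always reproduces the same 'initial noise'."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# Hypothetical stand-ins for two different checkpoints:
# same noise in, different "image" out.
def model_v15(noise):
    return [round(x * 10, 3) for x in noise]

def model_sdxl(noise):
    return [round(x * 100 + 7, 3) for x in noise]

noise_a = sample_noise(1234567)
noise_b = sample_noise(1234567)
print(noise_a == noise_b)                          # True: the seed fixes the noise
print(model_v15(noise_a) == model_sdxl(noise_a))   # False: the model fixes the output
```

Same idea in the real pipelines: matching the seed only matches the starting noise; the checkpoint weights (plus sampler, steps, CFG, resolution) determine what that noise becomes.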
I have set up Stable Diffusion locally using [this setup instruction](https://github.com/automatic1111/stable-diffusion-webui/wiki/install-and-run-on-nvidia-gpus).
It runs OK locally. Here is the `run.bat` start-up screen:

*(screenshot of the start-up console output)*

I used a sample prompt:

*(screenshot of the prompt)*

and a seed value of `1234567` to generate images with both Stable Diffusion Online and my local setup. The generated images look quite different. Here is the online version of the generated image:

*(online image)*

and here is the local version:

*(local image)*
What generation parameters does the online version use? If they were known and applied to the local generation, would the local setup produce the same image or not? Does the result depend not only on the Stable Diffusion model version and generation parameters, but also on the hosting hardware?
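For reference, the full set of settings that would have to match is the kind of parameter block the AUTOMATIC1111 webui embeds in its output PNGs (visible via the PNG Info tab). The values below are just a hypothetical example, not the online service's actual settings:

```text
a photograph of an astronaut riding a horse
Negative prompt: blurry, low quality
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 1234567, Size: 512x512, Model: v1-5-pruned-emaonly
```

Unless every one of these (and above all the model checkpoint itself) matches what the online service uses, the outputs will differ; even with identical settings, different GPUs can introduce small variations.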