What's the difference between torch.manual_seed() and torch.Generator(device="cuda").manual_seed()? #6611
Replies: 3 comments 5 replies
-
Diffusers has a nice article in the documentation about this: https://huggingface.co/docs/diffusers/using-diffusers/reproducibility#control-randomness For your question: even if you pass a CPU generator while the pipeline is on the GPU, the pipelines use a custom helper function from diffusers, and your answer is in its docstring.
As to how the seed affects the generation, I have a general understanding of it, but I'm no expert, so I'd prefer that someone else answer that question.
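The helper referred to above is `randn_tensor` (in `diffusers.utils.torch_utils`). Here is a minimal sketch in plain PyTorch of the behavior its docstring describes, not the actual diffusers implementation (the function name `randn_tensor_sketch` is mine): when the generator lives on a different device than the target, the noise is sampled on the generator's device first and then moved.

```python
import torch

def randn_tensor_sketch(shape, generator=None, device=None, dtype=None):
    # Sketch of the documented behavior: sample on the generator's device
    # (e.g. CPU), then move the tensor to the pipeline's device. This is why
    # a CPU generator still controls a GPU pipeline's noise reproducibly.
    rand_device = generator.device if generator is not None else device
    latents = torch.randn(shape, generator=generator, device=rand_device, dtype=dtype)
    return latents.to(device) if device is not None else latents

g = torch.Generator(device="cpu").manual_seed(0)
x = randn_tensor_sketch((2, 4), generator=g, device="cpu")
g = torch.Generator(device="cpu").manual_seed(0)
y = randn_tensor_sketch((2, 4), generator=g, device="cpu")
assert torch.equal(x, y)  # same CPU seed -> identical noise
```

On a CUDA machine you would pass `device="cuda"`; the sampled values stay the same because the draw itself happens on the CPU.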
-
I forgot about the first question: there is no "correct" result in Stable Diffusion, just different generations. You're not going to get the same result if you use the same seed on the CPU and on the GPU, even if you change the precision. Since GPUs are built to be fast, they're less deterministic than the CPU, so if you want to be able to reproduce the image again, you should stick with creating the generator on the CPU.
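To make the "create the generator on the CPU" advice concrete, here's a small illustration in plain PyTorch (no pipeline involved): a generator advances its state with every draw, so to reproduce an image you re-create the generator with the same seed before each pipeline call.

```python
import torch

gen = torch.Generator(device="cpu").manual_seed(42)
first = torch.randn(3, generator=gen)
second = torch.randn(3, generator=gen)   # state has advanced -> different noise
assert not torch.equal(first, second)

# Re-creating the generator with the same seed reproduces the first draw.
gen = torch.Generator(device="cpu").manual_seed(42)
again = torch.randn(3, generator=gen)
assert torch.equal(first, again)
```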
-
@Joanne0720 can you try creating the generator on the CPU?
-
I'm using
generator = torch.Generator(device="cuda").manual_seed(0)
and
generator = torch.manual_seed(0)
in StableDiffusionXLPipeline, and get different results. Code:
image1:
image2:
I know that
generator = torch.manual_seed(1234)
is the same as
generator = torch.Generator(device="cpu").manual_seed(1234)
But I'm confused about:
1. Should I use torch.Generator(device="cuda").manual_seed() to get the correct result?
2. With torch.Generator(device="cpu").manual_seed(1234), why is the result still "reproducible" if the random tensors are being created on the GPU?