Replies: 1 comment
-
Never mind. It's just a confusingly put-together readme. The "dreambooth method" and "finetune method" are identical if you aren't using regularization images.
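A minimal sketch of why the two coincide without regularization images: DreamBooth's objective differs from plain finetuning only by a prior-preservation term computed on class (regularization) images. All names below are illustrative, not taken from the repo's code.

```python
# Illustrative sketch (not kohya-ss code): DreamBooth training adds a
# prior-preservation loss on regularization ("class") images to the
# ordinary reconstruction loss that finetuning already uses.
def dreambooth_loss(instance_loss: float,
                    prior_loss: float,
                    prior_weight: float = 1.0,
                    use_reg_images: bool = True) -> float:
    """Combined objective; argument names are hypothetical."""
    if not use_reg_images:
        # With no regularization images the prior term vanishes,
        # leaving exactly the plain finetuning reconstruction loss.
        return instance_loss
    return instance_loss + prior_weight * prior_loss

# With regularization images: both terms contribute.
print(dreambooth_loss(0.5, 0.2))                        # instance + prior
# Without them: identical to ordinary finetuning's loss.
print(dreambooth_loss(0.5, 0.2, use_reg_images=False))  # instance only
```

This is why, absent regularization images, the dataset handling is the only remaining difference between the two readme sections.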
-
First, I need to be very clear.
I am not referring to dreambooth. I am not referring to dreambooth LoRA.
I am not referring to full finetuning. I am referring specifically to the LoRA implementation of finetuning described in the readme.
With that out of the way, and with people trying to tell me the difference between dreambooth and dreambooth LoRA hopefully gone:
Has anyone tried and compared the two? Did using the LoRA method enable smaller datasets, or is that solely a property of dreambooth/dreambooth LoRA? Why does the finetune method, as opposed to the dreambooth method, require so many additional steps? Is there even such a thing as a finetune LoRA, or is that an error in the readme?