Anyone had any success with this? #99
-
Need a bit more detail to diagnose.
-
Here are some success cases for LoRA within the community, including mine: https://twitter.com/cloneofsimo/status/1603834716951830529 Also, if you are using the A1111 Dreambooth extension for LoRA, it might be a bit suboptimal; there have been four major updates since text-encoder fine-tuning: Pivotal Tuning Inversion, multiword inversion, ROI-conditioned loss, and MLP tuning. My latest results are with rank-1 output (a 0.8 MB file).
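For anyone loading weights trained with this repo, inference looks roughly like the sketch below. It's a minimal example assuming the README-style API; the model id, checkpoint path, and `<s1><s2>` inversion tokens are placeholders to swap for your own run:

```python
import torch
from diffusers import StableDiffusionPipeline
from lora_diffusion import patch_pipe, tune_lora_scale

# Load a base Stable Diffusion pipeline (model id is an example).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Patch the UNet, text encoder, and textual-inversion tokens with the
# trained LoRA checkpoint (path is a placeholder for your own file).
patch_pipe(
    pipe,
    "./output/final_lora.safetensors",
    patch_text=True,
    patch_ti=True,
    patch_unet=True,
)

# LoRA strength is tunable at inference time; 1.0 means full effect.
tune_lora_scale(pipe.unet, 0.8)
tune_lora_scale(pipe.text_encoder, 0.8)

# <s1><s2> are the learned pivotal-tuning tokens from training.
image = pipe(
    "a portrait of <s1><s2>", num_inference_steps=50, guidance_scale=7.0
).images[0]
image.save("sample.png")
```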
-
And is there a way to fine-tune with captions?
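For context, by "captions" I mean the common per-image sidecar convention, where each image is paired with a same-named .txt file. A generic sketch of such a dataset (illustrative only, not this repo's documented API):

```python
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset


class CaptionedImageDataset(Dataset):
    """Pairs each image in a folder with a same-named .txt caption file.

    A generic sketch of the sidecar-caption convention; the class name
    and layout are illustrative, not part of this repo.
    """

    def __init__(self, root, transform=None):
        self.paths = sorted(
            p for p in Path(root).iterdir()
            if p.suffix.lower() in {".png", ".jpg", ".jpeg", ".webp"}
        )
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        path = self.paths[idx]
        image = Image.open(path).convert("RGB")
        if self.transform is not None:
            image = self.transform(image)
        # Fall back to an empty caption if the sidecar file is missing.
        txt = path.with_suffix(".txt")
        caption = txt.read_text(encoding="utf-8").strip() if txt.exists() else ""
        return {"image": image, "caption": caption}
```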
-
I've tested this and it does what it says: fast (high batch count) and low VRAM usage (it can go as low as 4.7 GB). But the quality...
Regular Dreambooth needs 100-200 steps per image to get it right (and maybe another 100-200 to get it perfect).
With LoRA, after 1,000 steps per image it's not even close. I can see it's getting there, and it has an idea of what's going on, but the results are like Dreambooth after 20-50 steps (rough step tally below).
I've tried different learning rates, captions vs. no captions, etc.
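To put rough numbers on that comparison, here's the back-of-the-envelope step budget I'm describing. The 20-image dataset size is just an assumed example, and "steps per image" means total optimizer steps divided by image count:

```python
# Back-of-the-envelope step budgets for an assumed 20-image dataset.
num_images = 20

db_right = num_images * 150      # regular Dreambooth: ~100-200 steps/image
db_perfect = num_images * 300    # plus another ~100-200 steps/image on top
lora_tried = num_images * 1000   # what I ran with LoRA; still not close

print(f"Dreambooth (right):   {db_right:>6} steps")
print(f"Dreambooth (perfect): {db_perfect:>6} steps")
print(f"LoRA (attempted):     {lora_tried:>6} steps")
```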
Anyone had any notable success with it?