I haven't fully fixed the script, and I'm really not sure how anyone has had success with it (e.g. the blog post). There are many showstoppers when trying to get Textual Inversion working.
To debug, I started peeling away at the main Flux training script and have fixed all of the following bugs, BUT it still doesn't train well. With any hyperparameters, it fails to learn a new concept, even to overfit on it, and it quickly degrades. So the fixes below are necessary, but not sufficient, to get this working.
On a 1-node, 2-GPU setup, accelerate wraps everything in a DistributedDataParallel-style class, so all the calls to model.dtype / model.config etc. error out since the models are wrapped, and you actually need something like model.module.dtype (sketch below).
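A minimal sketch of the workaround, assuming the accelerate/transformer objects from the script (names may differ from the actual file):

```python
# Minimal sketch: DistributedDataParallel exposes the real model under
# `.module`, so attribute access has to go through the unwrapped model.
# `transformer` stands in for whatever wrapped model you're touching.
def unwrap(model):
    # accelerator.unwrap_model(model) does the same job if the Accelerator
    # is in scope; this is the bare-hands version.
    return model.module if hasattr(model, "module") else model

weight_dtype = unwrap(transformer).dtype      # instead of transformer.dtype
model_config = unwrap(transformer).config     # instead of transformer.config
```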
It requires an --instance_prompt even if you're using a dataset with a custom text column.
text_embeddings get saved weirdly; it looks like a single copy gets saved regardless of the checkpoint number?
In log_validation there's a dtype-mismatch runtime error when training in fp16/bf16, and you need to autocast (sketch below).
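Roughly what I mean, with the pipeline/args names assumed from the script:

```python
import torch

# Sketch: run validation inference under autocast so the fp16/bf16 weights
# and fp32 activations stop colliding. `pipeline`, `accelerator`,
# `weight_dtype`, `args`, and `generator` are assumed from the script.
with torch.autocast(accelerator.device.type, dtype=weight_dtype):
    images = [
        pipeline(prompt=args.validation_prompt, generator=generator).images[0]
        for _ in range(args.num_validation_images)
    ]
```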
Also in log_validation, the Generator doesn't seem to do anything: every generation runs on a different seed, so you can't watch the evolution/performance on the same samples. This is remedied by saving the RNG state of torch / torch.cuda / random, using the Generator, and then resetting the random state afterward (sketch below).
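Something along these lines (the helper name is mine, not the script's):

```python
import random
import torch

def validate_on_fixed_seed(pipeline, prompt, seed, num_images):
    # Snapshot the global RNG state so the fixed-seed validation run
    # doesn't perturb the training-time random stream.
    py_state = random.getstate()
    torch_state = torch.get_rng_state()
    cuda_states = torch.cuda.get_rng_state_all() if torch.cuda.is_available() else None

    generator = torch.Generator(device=pipeline.device).manual_seed(seed)
    images = [pipeline(prompt=prompt, generator=generator).images[0] for _ in range(num_images)]

    # Restore the snapshot so training randomness continues unaffected.
    random.setstate(py_state)
    torch.set_rng_state(torch_state)
    if cuda_states is not None:
        torch.cuda.set_rng_state_all(cuda_states)
    return images
```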
T5 training doesn't work. On this line it unfreezes token_embedding, which doesn't exist in T5; shared needs to be unfrozen instead (sketch below).
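For reference, a sketch of the fix, assuming text_encoder_two is the T5 encoder as in the script:

```python
# CLIP keeps its token embedding at text_model.embeddings.token_embedding,
# but T5EncoderModel keeps its embedding table in `shared`, so that's the
# parameter that has to be unfrozen.
text_encoder_two.shared.requires_grad_(True)

# Equivalent and model-agnostic:
text_encoder_two.get_input_embeddings().requires_grad_(True)
```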
Just a weird one: pulling the embeddings toward std**0.1 makes intuitive sense, but this kind of thing should definitely come with a justification. Was this done in a paper somewhere?
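For anyone skimming, here's roughly what that step does as I read the code (paraphrased, names are illustrative, not the script's exact lines):

```python
# The updated token embeddings get rescaled so their std drifts back toward
# the std of the original embedding table, softened by the 0.1 exponent.
# `embedding_weight`, `index_updates`, and `std_token_embedding` are
# placeholders for the script's equivalents.
with torch.no_grad():
    new_embeddings = embedding_weight[index_updates]
    off_ratio = std_token_embedding / new_embeddings.std()
    embedding_weight[index_updates] = new_embeddings * (off_ratio ** 0.1)
```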
Some nice-to-haves that I wish this did out of the box (but I'd be happy if the simple path above "just worked"):
save latent cache to disk (rough sketch after this list)
aspect ratio bucketing
8bit backbone
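For the first one, a rough sketch of what I have in mind (all names are mine, nothing here is from the script):

```python
import os
import torch

def cache_latents_to_disk(vae, dataloader, cache_dir, dtype=torch.float16):
    # Encode each image once with the VAE and dump the latents to disk so
    # later epochs can skip the VAE encoder entirely.
    os.makedirs(cache_dir, exist_ok=True)
    vae.eval()
    with torch.no_grad():
        for i, batch in enumerate(dataloader):
            pixel_values = batch["pixel_values"].to(device=vae.device, dtype=vae.dtype)
            latents = vae.encode(pixel_values).latent_dist.sample()
            torch.save(latents.to(dtype).cpu(), os.path.join(cache_dir, f"latents_{i:06d}.pt"))
```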
Honestly, at this point I'm tempted to give up on diffusers and use something else for this current client. After sweating over fixing the above, I still haven't gotten it to work, and I still haven't been able to stress-test things like the DreamBooth functionality or the whole-text-encoder finetuning feature.
@bonlime gah! I was cross-referencing the two, since the SDXL script does some things right that the Flux script doesn't, so I accidentally copied the wrong link. I've updated them, thanks!
All the problems still stand, and I'm happy to help out if someone wants to scour this with me and get it working.
freckletonj changed the title from "Multiple bugs in flux dreambooth script: train_dreambooth_lora_sdxl_advanced.py" to "Multiple bugs in flux dreambooth script: train_dreambooth_lora_flux_advanced.py" on Dec 20, 2024