
Loading LTX lora in Comfy #133

Closed
2 tasks
neph1 opened this issue Dec 22, 2024 · 4 comments
Comments


neph1 commented Dec 22, 2024

System Info

I've loaded the trained LoRA with the default LoRA loader in ComfyUI. I'm not sure whether the issue lies with diffusers, finetrainers, or ComfyUI, so I'll start here and move elsewhere if you think it isn't a finetrainers issue.
Using the default ComfyUI nodes, I get many of these errors:

lora key not loaded: transformer.transformer_blocks.9.attn2.to_q.lora_A.weight
lora key not loaded: transformer.transformer_blocks.9.attn2.to_q.lora_B.weight
lora key not loaded: transformer.transformer_blocks.9.attn2.to_v.lora_A.weight
lora key not loaded: transformer.transformer_blocks.9.attn2.to_v.lora_B.weight

Since the key format looked odd with the doubled "transformer", I wrote a simple script that removes the "transformer." prefix from each key name. After that, the LoRA loads fine in ComfyUI and affects the output.
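The fix described above can be sketched as a small key-renaming helper (a hypothetical sketch: the function name is mine, and in practice the state dict would be read from and written back to a `.safetensors` file, e.g. with `safetensors.torch.load_file`/`save_file`):

```python
def strip_transformer_prefix(state_dict: dict) -> dict:
    """Drop the leading 'transformer.' from every key that carries it."""
    prefix = "transformer."
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }

# Example: the failing key from the log above becomes loadable in ComfyUI.
fixed = strip_transformer_prefix(
    {"transformer.transformer_blocks.9.attn2.to_q.lora_A.weight": "tensor"}
)
```

Slicing off the prefix (rather than `str.replace`) guarantees that only a leading `transformer.` is removed, and keys without the prefix pass through unchanged.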

The LoRA loads fine in a diffusers inference workflow, so perhaps it is ComfyUI that needs to handle this key format.
Looking at their code, ComfyUI does support diffusers-style LoRAs with lora_A and lora_B in the key names.

Then I did some digging into finetrainers code, and found this block:

transformer_state_dict = {
    f'{k.replace("transformer.", "")}': v
    for k, v in lora_state_dict.items()
    if k.startswith("transformer.")
}

So it seems this isn't an unknown issue. :) But maybe the handling doesn't work as intended.
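For illustration, here is the quoted comprehension applied to one of the failing keys from the error log above (the sample values are made up). Note that `str.replace` removes every occurrence of `"transformer."`, not just a leading one, though for these key names only the prefix happens to match (`transformer_blocks` has an underscore, not a dot):

```python
lora_state_dict = {
    "transformer.transformer_blocks.9.attn2.to_q.lora_A.weight": "w",
    "text_encoder.some.key": "x",  # dropped by the startswith() filter
}
transformer_state_dict = {
    k.replace("transformer.", ""): v
    for k, v in lora_state_dict.items()
    if k.startswith("transformer.")
}
```

So the comprehension does produce the prefix-free keys ComfyUI expects; the question is whether it runs at the point where the LoRA file is actually saved.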

Information

  • The official example scripts
  • My own modified scripts

Reproduction

  1. Train an LTX LoRA with finetrainers
  2. Use the default LTX workflow in ComfyUI and add a LoraLoaderModelOnly node (the same happens with the regular CLIP-loading Lora Loader)
  3. Observe the errors

Expected behavior

I can load and use the LoRA in ComfyUI.

a-r-r-o-w (Owner) commented

As you suggested, this looks like something that will need to be handled by the ComfyUI nodes, because the exported LoRAs use a different naming format. Based on the implementation used for training, the state dict will have to be renamed. I can look into supporting original-format LoRAs in diffusers (should be quite easy, at most an hour of work), but the ones trained here will need separate conversion utilities. Gentle ping to @kijai if you would be able to help with this 🤗


neph1 commented Dec 23, 2024

I believe I have the relevant file open, so I might give this a try in the coming days.

Edit: I have a speculative fix for this. I will make a PR, but have no idea whether my solution is suitable or not.


neph1 commented Dec 23, 2024

Ref: comfyanonymous/ComfyUI#6174

sayakpaul (Collaborator) commented

Since this is not directly related to finetrainers, I will close this. But you're more than welcome to re-open if needed.
