Question: Setting the Lora scale via lora_alpha and cross_attention_kwargs gives different results #10024
Unanswered · laolongboy asked this question in Q&A · Replies: 0 comments
My LoRA rank is 128 and alpha is 64, so the scale is alpha / rank = 0.5.

I use the `save_lora_weights` function to save the weights locally, and the saved state_dict file doesn't contain alpha. During inference I set `cross_attention_kwargs={"scale": 0.5}`, but the result is not good. I then tried initializing a `LoraConfig` with rank=128 and alpha=64 and using `unet.add_adapter` and `unet.set_adapter`, which finally gave the expected results.

Can someone explain why?
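For reference, a minimal sketch of the two setups described above, assuming an SD 1.5 pipeline and the peft-backed LoRA integration in diffusers; the checkpoint id, save directory, and `target_modules` list are placeholders:

```python
import torch
from peft import LoraConfig
from peft.utils import get_peft_model_state_dict
from diffusers import StableDiffusionPipeline
from diffusers.utils import convert_state_dict_to_diffusers

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Setup 2: attach an adapter whose scaling (lora_alpha / r = 64 / 128 = 0.5)
# is part of the adapter config itself.
lora_config = LoraConfig(
    r=128,
    lora_alpha=64,
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],  # placeholder module list
)
pipe.unet.add_adapter(lora_config)
# ... training of the LoRA layers would happen here ...

# Save only the LoRA layers; the resulting state_dict holds the A/B matrices
# but no record of lora_alpha.
unet_lora_state_dict = convert_state_dict_to_diffusers(
    get_peft_model_state_dict(pipe.unet)
)
StableDiffusionPipeline.save_lora_weights(
    save_directory="my_lora",  # placeholder path
    unet_lora_layers=unet_lora_state_dict,
)

# Setup 1: reload the saved weights and pass the scale at call time instead.
pipe.unload_lora_weights()
pipe.load_lora_weights("my_lora")
image = pipe(
    "a photo of an astronaut riding a horse",
    cross_attention_kwargs={"scale": 0.5},
).images[0]
```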