frozen modules also be lora #2250
Comments
In general, you can control which modules are targeted with LoRA by defining the target_modules argument of the LoRA config.
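For illustration, here is a minimal sketch of how that targeting works; the base model and module names (facebook/opt-125m, q_proj, v_proj) are placeholders chosen for the example, not taken from the original discussion:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder base model; any causal LM with q_proj/v_proj layers behaves the same way.
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# LoRA is attached to every module whose name matches target_modules,
# regardless of whether the base parameters of that module are frozen.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, config)
model.print_trainable_parameters()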
Maybe it would be helpful to raise a warning message in this case.
Thanks for the suggestion, but I'm skeptical. This could lead to a lot of unnecessary warnings for users, when there was no warning previously. Having a lot of unnecessary warnings is dangerous as they can drown out important warnings and train the user to ignore them. Moreover, the logic would not be easy to implement:
I can see that as a user, I could make the wrong assumption that @onehaitao did. But nowhere in PEFT do we suggest that the logic for adding LoRA layers is related to whether the base parameter is frozen or not. We specifically describe how to change the modules to target. Therefore, I'd say the warning is not worth the tradeoff. The only thing I could see us adding is a mention in an appropriate place in the docs that the logic for which modules are targeted does not depend on whether the base weight is frozen.
It would only check whether frozen params will be unfrozen (due to PEFT). But yeah, the second point might be a hurdle. Adding this info to the docs is, I think, the best way.
Note that this should not happen, the base model parameters stay frozen. If I understood @onehaitao correctly, their complaint is that LoRA is applied to the frozen param, not that this param is unfrozen.
Ah, you're right - sorry, I misunderstood the question ("modules" confused me here). Yeah, but I mean, that's the principal idea of LoRA 🤷🏻‍♂️
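To make that point concrete, here is a small check, continuing the placeholder setup sketched above: after get_peft_model, only the injected LoRA parameters are trainable, while the base weights keep requires_grad=False.

# Only the injected lora_A / lora_B parameters should show up here;
# the original (frozen) base weights are left untouched.
for name, param in model.named_parameters():
    if param.requires_grad:
        print(name)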
I think it would be better.
Well, I mean that if a frozen module is added to LoRA, the original logic is changed (the frozen module was not supposed to participate in training).
I understand that this could come as a surprise to some, but for the reasons given above, we don't want to make the LoRA targeting dependent on whether a module is frozen, and I explained why giving warnings for this is problematic. It is a good and common practice to print the model after applying PEFT to double-check that it was applied to the desired modules. You can also inspect the model to verify exactly which modules were targeted.
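As a sketch of that inspection step (the names involved come from the placeholder example above, not from the original thread):

# Printing the wrapped model shows which sub-modules were replaced by LoRA layers
# (they appear as lora.Linear modules with lora_A / lora_B children).
print(model)

# Collect the names of all modules that actually received LoRA layers.
lora_targets = sorted(
    {name.split(".lora_A")[0] for name, _ in model.named_modules() if ".lora_A" in name}
)
print(lora_targets)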
Okay, I am currently using this method, but when the model is relatively complex it is not very convenient, and it is easy to make mistakes.
If you have trouble targeting (or not targeting) specific modules, let me know. The config options should be quite flexible and handle almost all use cases.
You might also check out the troubleshooting page in the docs.
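As an illustration of that flexibility (a sketch; the module names and layer indices are hypothetical), target_modules can be given either as a list of name suffixes or as a single regex string, and layers_to_transform can further restrict targeting to specific layer indices:

from peft import LoraConfig

# 1) List of names: a module is targeted if its name ends with one of these suffixes.
cfg_list = LoraConfig(target_modules=["q_proj", "v_proj"])

# 2) Single string: treated as a regex and matched against the full module path,
#    which allows much more precise targeting.
cfg_regex = LoraConfig(target_modules=r".*decoder\.layers\.\d+\.self_attn\.(q_proj|v_proj)")

# 3) Restrict LoRA to certain layer indices on top of the name matching.
cfg_layers = LoraConfig(target_modules=["q_proj", "v_proj"], layers_to_transform=[0, 1, 2])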
Thank you very much for your help. I have solved my problem by targeting specific modules. If you have any suggestions, please let me know.
My model consists of an LLM and a ViT; the ViT is frozen and the LLM is trainable. The ViT and the LLM both have similar transformer layers, so if I target modules by their common layer names, the LoRA layers easily end up in both the ViT and the LLM.
Glad you found a way that works for you. Just in case you did not know, there are more targeting options than a plain list of layer names.
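For a setup like the one described (a frozen ViT and a trainable LLM that share layer names), one option is the regex form of target_modules, so that only modules under the language-model branch are matched. This is only a sketch: the attribute names below (language_model, q_proj, v_proj) are assumptions and will differ between models.

from peft import LoraConfig

# Match q_proj/v_proj only when they live under the language_model sub-module,
# leaving the identically named layers inside the frozen ViT untouched.
lora_config = LoraConfig(target_modules=r".*language_model.*\.(q_proj|v_proj)")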
System Info
Who can help?
No response
Information
Tasks
examples folder
Reproduction
model = get_peft_model(model, lora_config)
Expected behavior
Hello, I use LoRA fine-tuning for my model. I found that LoRA will be enabled in frozen modules. For example, my model consists of module A and module B, and A is frozen. I found that LoRA is added into module A after
model = get_peft_model(model, lora_config)
is called.
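A self-contained sketch of the reported behavior, using a toy model with hypothetical module names (module_a, module_b) rather than the reporter's actual model:

import torch.nn as nn
from peft import LoraConfig, get_peft_model

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.module_a = nn.Linear(16, 16)  # will be frozen
        self.module_b = nn.Linear(16, 16)

    def forward(self, x):
        return self.module_b(self.module_a(x))

model = ToyModel()
for p in model.module_a.parameters():
    p.requires_grad = False  # freeze module A

# Targeting is purely name-based, so LoRA layers are injected into the frozen
# module_a as well as into module_b; the frozen base weights themselves stay frozen.
peft_model = get_peft_model(model, LoraConfig(target_modules=["module_a", "module_b"]))
print(peft_model)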