Feature request

In simple terms, I would like support for users to define their own custom adapters. I noticed that a user only needs to add a folder under src/peft/tuners and place a few adapter-related files there, usually config.py, layer.py, and model.py. However, during implementation I found that I also need to modify get_peft_model_state_dict in src/peft/utils/save_and_load.py so that the custom adapter is saved correctly: that function currently only handles the built-in adapters, so I have to edit the source code before a custom adapter can be used successfully.

PEFT is a very convenient and efficient fine-tuning library, and it would be even better if this feature were supported. Perhaps you've already implemented this functionality and I simply haven't found it; if so, please point me to it. Thank you very much.
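For concreteness, here is a minimal, purely illustrative sketch of what the config.py of such a folder could look like. The names MyAdapterConfig, reduction_factor, and MY_ADAPTER are all made up for this example, and the need for a matching PeftType enum entry is exactly the kind of change outside src/peft/tuners described above:

```python
from dataclasses import dataclass, field

from peft import PeftConfig


@dataclass
class MyAdapterConfig(PeftConfig):
    """Config for a hypothetical bottleneck-style adapter (sketch only)."""

    reduction_factor: int = field(
        default=16, metadata={"help": "Bottleneck down-projection factor."}
    )

    def __post_init__(self):
        # Built-in configs assign a PeftType enum member here (e.g.
        # PeftType.LORA); a brand-new adapter would also need its own entry
        # added to the PeftType enum -- one of the changes that currently
        # has to be made outside the tuners folder.
        self.peft_type = "MY_ADAPTER"
```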
Motivation
I hope to use custom adapters to fine-tune large language models.
Your contribution
Currently, I have no clear ideas.
You are correct: when adding a new method, besides creating a new directory inside src/peft/tuners, it is also necessary to make a few adjustments to other PEFT files (usually very small ones, but it depends on the type of adapter).

Regarding the change in get_peft_model_state_dict: yes, we can refactor it to work out of the box for new adapters without further changes, and I'll take a look. Note, however, that by adding new code under src/peft/tuners you're already changing the PEFT code base, so editing get_peft_model_state_dict is not really that different.
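To illustrate one possible direction (a rough sketch, not PEFT's actual implementation): each tuner model class already declares a parameter prefix, e.g. LoraModel.prefix is "lora_", so the function could filter on that instead of branching per method. The real function additionally handles bias parameters, modules_to_save, and stripping the adapter name from the keys, which this sketch ignores:

```python
def get_peft_model_state_dict_sketch(model, adapter_name="default"):
    # `model` is a PeftModel; every BaseTuner subclass (LoraModel,
    # IA3Model, ...) declares a class-level `prefix` such as "lora_".
    prefix = model.base_model.prefix
    return {
        k: v
        for k, v in model.state_dict().items()
        if prefix in k and adapter_name in k
    }
```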
"Perhaps you've already implemented this functionality"
No, but as mentioned, I'll look into it; I have been meaning to facilitate this for a long time. The only thing we have right now is a way to add custom LoRA layers, but not yet a way to add completely new adapter types.
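For reference, the custom-LoRA-layer mechanism mentioned here is the experimental dynamic-dispatch hook, roughly as below. This is a sketch only: _register_custom_module is a private, experimental API that may change between versions, and MyLoraLSTM is a hypothetical placeholder whose real implementation would subclass peft.tuners.lora.layer.LoraLayer and implement the actual LoRA forward pass:

```python
import torch.nn as nn
from peft import LoraConfig


class MyLoraLSTM(nn.Module):
    """Hypothetical LoRA wrapper for nn.LSTM; the real thing would
    subclass peft.tuners.lora.layer.LoraLayer -- body omitted here."""

    def __init__(self, base_layer, adapter_name, **kwargs):
        super().__init__()
        self.base_layer = base_layer


config = LoraConfig(target_modules=["lstm"])
# Experimental hook: map a module type PEFT does not handle natively
# to a custom LoRA layer class.
config._register_custom_module({nn.LSTM: MyLoraLSTM})
```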