Support for Custom Adapters #2273

Open
dgme-syz opened this issue Dec 11, 2024 · 1 comment

Comments

@dgme-syz

Feature request

In simple terms, I would like support for users to define their own custom adapters. I noticed that a new adapter mainly needs a folder under src/peft/tuners containing the adapter-related files, usually config.py, layer.py, and model.py.
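For concreteness, here is a minimal sketch of what the config.py of such a folder might contain, following the pattern of the existing tuners; the folder name my_adapter, the class MyAdapterConfig, and its fields are invented for this example:

```python
# src/peft/tuners/my_adapter/config.py -- hypothetical sketch, not part of PEFT
from dataclasses import dataclass, field
from typing import List, Optional

from peft import PeftConfig


@dataclass
class MyAdapterConfig(PeftConfig):
    """Configuration for a hypothetical custom adapter.

    Existing tuners also set ``self.peft_type`` to a member of the ``PeftType``
    enum in ``__post_init__``; a genuinely new adapter type would therefore need
    its own ``PeftType`` entry and the corresponding mapping registrations.
    """

    r: int = field(default=8, metadata={"help": "Rank of the adapter update."})
    target_modules: Optional[List[str]] = field(
        default=None, metadata={"help": "Names of the modules to wrap with the adapter."}
    )
```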

However, during implementation I found that I also need to modify get_peft_model_state_dict in src/peft/utils/save_and_load.py so that the custom adapter is saved correctly. That function only handles the existing adapter types, so I have to edit the PEFT source code before a custom adapter can be used end to end.
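To illustrate what that function has to do for each adapter type: it filters the model's full state dict down to just the adapter parameters, branching on the adapter type. Below is a standalone sketch of that filtering step, assuming a hypothetical "my_adapter_" prefix on the custom adapter's parameter names:

```python
from typing import Dict

import torch


def extract_my_adapter_state_dict(
    state_dict: Dict[str, torch.Tensor], prefix: str = "my_adapter_"
) -> Dict[str, torch.Tensor]:
    # Keep only the parameters introduced by the custom adapter, identified by a
    # name prefix -- the same idea the existing LoRA branch uses with "lora_".
    # This is roughly the branch one currently has to add by hand inside
    # get_peft_model_state_dict for a new adapter type.
    return {k: v for k, v in state_dict.items() if prefix in k}
```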

PEFT is the most convenient and efficient fine-tuning library, and it would be even better if this feature were supported. Perhaps you have already implemented this functionality and I simply haven't found it; if so, please point it out. Thank you very much.

Motivation

I hope to use custom adapters to fine-tune large language models.

Your contribution

Currently, I have no clear ideas.

@BenjaminBossan
Member

You are correct: when adding a new method, besides creating a new directory inside src/peft/tuners, a few adjustments to other PEFT files are also necessary (usually very small adjustments, but it depends on the type of adapter).

Regarding the change in get_peft_model_state_dict: yes, we can refactor it to work out of the box for new adapters without further changes; I'll take a look. But note that by adding new code to src/peft/tuners you are already changing the PEFT code base, so editing get_peft_model_state_dict is not really that different.

“Perhaps you’ve already implemented this functionality”

No, but as mentioned, I'll look into this; I have wanted to facilitate it for a long time. The only thing we have right now is a way to add new custom LoRA layers, but not yet a way to add completely new adapter types.
