It depends on the fine-tuning technique, but yes, in general it should work. E.g. for LoRA, you can set the relevant options on the `LoraConfig` to cover the newly added layers.
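A minimal sketch of that route, assuming a hypothetical subclassed model with one new classifier layer (the class name, target module names, and hyperparameters below are illustrative, not from this thread): `target_modules` chooses where the LoRA adapters are injected, and `modules_to_save` keeps the newly added layer fully trainable and stored with the adapter checkpoint.

```python
# Illustrative sketch (not code from this thread): LoRA on a subclassed
# pre-trained model that adds a new classification head. The class name,
# target module names, and hyperparameters are assumptions for the example.
import torch.nn as nn
from transformers import AutoModel
from peft import LoraConfig, get_peft_model


class BackboneWithNewHead(nn.Module):
    """A pre-trained encoder wrapped together with a freshly initialised classifier."""

    def __init__(self, num_labels: int = 2):
        super().__init__()
        self.backbone = AutoModel.from_pretrained("bert-base-uncased")
        self.classifier = nn.Linear(self.backbone.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None):
        hidden = self.backbone(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.classifier(hidden[:, 0])  # classify from the [CLS] position


model = BackboneWithNewHead()

config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    # Inject LoRA adapters into the backbone's attention projections
    # (matched by module-name suffix).
    target_modules=["query", "value"],
    # Keep the new layer fully trainable and store its weights with the adapter.
    modules_to_save=["classifier"],
)

peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()  # LoRA weights + classifier only
```

Set up this way, `peft_model.save_pretrained(...)` should write the LoRA weights and the `modules_to_save` layers together, so the new head is not lost when the adapter is reloaded.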
Hello,
I'm wondering how (and whether it is possible) to use Hugging Face PEFT fine-tuning with a subclassed pre-trained model that has new layers.
Example:
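For instance, a model of the kind described might be subclassed roughly like this (an illustrative sketch, with an assumed backbone checkpoint and layer names rather than the thread's original code):

```python
# Illustrative sketch of what "a subclassed pre-trained model with new layers"
# could look like; the backbone checkpoint and layer names are assumptions.
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel


class CustomModel(nn.Module):
    def __init__(self, num_labels: int = 2):
        super().__init__()
        # Pre-trained backbone loaded from the Hub.
        self.encoder = AutoModel.from_pretrained("roberta-base")
        hidden_size = self.encoder.config.hidden_size
        # New, randomly initialised layers added on top of the backbone.
        self.projection = nn.Linear(hidden_size, 256)
        self.classifier = nn.Linear(256, num_labels)

    def forward(self, input_ids, attention_mask=None):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        pooled = hidden[:, 0]  # first-token pooling
        return self.classifier(F.relu(self.projection(pooled)))
```

For a model shaped like this, the recipe from the reply above should apply directly: point `target_modules` at the encoder's attention projections and list the new layers (here `projection` and `classifier`) in `modules_to_save`.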