Replies: 4 comments
- QLoRA has been supported.
- QLoRA support was added in release v0.4.0; you can read up on how to make use of the integration here: https://huggingface.co/blog/4bit-transformers-bitsandbytes

  P-tuning v2 is already supported through the prefix-tuning implementation! As mentioned in the original paper, P-tuning v2 can be viewed as an "optimized and adapted implementation" of prefix tuning. In fact, the implementation of prefix tuning in PEFT, https://github.com/huggingface/peft/blob/main/src/peft/tuners/prefix_tuning/model.py (as of v0.5.0), is almost the same as the implementation in P-tuning v2 (https://github.com/THUDM/P-tuning-v2/blob/main/model/prefix_encoder.py).
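For anyone who wants a concrete starting point, here is a minimal sketch of the 4-bit loading step from the blog post above; the model id and quantization settings are placeholders rather than recommendations:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load the base model in 4-bit with bitsandbytes, as described in the blog post above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",            # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)
```

And the prefix-tuning implementation referenced above (which covers the P-tuning v2 use case) can be configured along these lines:

```python
from transformers import AutoModelForCausalLM
from peft import PrefixTuningConfig, TaskType, get_peft_model

# Prefix tuning in PEFT; num_virtual_tokens is a placeholder value.
prefix_config = PrefixTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,
)
model = get_peft_model(
    AutoModelForCausalLM.from_pretrained("facebook/opt-350m"),  # placeholder model id
    prefix_config,
)
model.print_trainable_parameters()
```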
- It looks like this has been deprecated now, as has been stated. Does anyone know how to work around this? `prepare_model_for_kbit_training` used to be the way to use it:

```python
from peft import LoraConfig, get_peft_model

lora_alpha = 16
peft_config = LoraConfig(
    lora_alpha=lora_alpha,
    # ... (remaining arguments omitted)
)
```
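For what it's worth, a sketch of how this looks on recent PEFT versions, assuming the intended flow is still `prepare_model_for_kbit_training` on a quantized model followed by `get_peft_model`; the model id and hyperparameters below are placeholders:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",   # placeholder model id
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)

# Prepare the quantized model for training (casts some modules to fp32 and
# enables input gradients so gradient checkpointing works with adapters).
model = prepare_model_for_kbit_training(model)

lora_alpha = 16
peft_config = LoraConfig(
    r=16,                  # placeholder rank
    lora_alpha=lora_alpha,
    lora_dropout=0.05,     # placeholder dropout
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```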
- In case anyone else sees this, it looks like the only change is:
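(If the change being referred to is the rename of PEFT's preparation helper, the update would look something like the following; the call site itself stays the same.)

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
# Old, deprecated import:
# from peft import prepare_model_for_int8_training
# Current import:
from peft import prepare_model_for_kbit_training

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",   # placeholder model id
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)
model = prepare_model_for_kbit_training(model)
```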
- Are there any plans to support QLoRA and P-Tuning v2?