v0.7.1 patch release
This is a small patch release of PEFT that fixes:
- Issues with loading multiple adapters when using quantized models (#1243)
- Incompatibilities between transformers v4.36 and some prompt learning methods (#1252)
## What's Changed
- [docs] OFT by @stevhliu in #1221
- Bump version to 0.7.1.dev0 post release by @BenjaminBossan in #1227
- Don't set config attribute on custom models by @BenjaminBossan in #1200
- TST: Run regression test in nightly test runner by @BenjaminBossan in #1233
- Lazy import of bitsandbytes by @BenjaminBossan in #1230
- FIX: Pin bitsandbytes to <0.41.3 temporarily by @BenjaminBossan in #1234
- [docs] PeftConfig and PeftModel by @stevhliu in #1211
- TST: Add tolerance for regression tests by @BenjaminBossan in #1241
- Bnb integration test tweaks by @Titus-von-Koeller in #1242
- [docs] PEFT integrations by @stevhliu in #1224
- Revert "FIX Pin bitsandbytes to <0.41.3 temporarily (#1234)" by @Titus-von-Koeller in #1250
- Fix model argument issue (#1198) by @ngocbh in #1205
- TST: Add tests for 4bit LoftQ by @BenjaminBossan in #1208
- [docs] Quantization by @stevhliu in #1236
- FIX: Truncate slack message to not exceed 3000 chars by @BenjaminBossan in #1251
- Issue with transformers 4.36 by @BenjaminBossan in #1252
- Fix: Multiple adapters with bnb layers by @BenjaminBossan in #1243
- Release: 0.7.1 by @BenjaminBossan in #1257
## New Contributors
- @Titus-von-Koeller made their first contribution in #1242
- @ngocbh made their first contribution in #1205
**Full Changelog**: v0.7.0...v0.7.1