Thanks for your great work. I wonder if it is possible to directly use alpaca_lora or stanford_alpaca to finetune the 8B model on an arbitrary dataset. Can we access the code? Or do we directly use block_expand to create a new model and then train that new model? Does this support the Hugging Face version? Thanks.
Yes, you can directly finetune the 8B model on any dataset. You can access the model on the Hugging Face Hub (https://huggingface.co/TencentARC/LLaMA-Pro-8B) and use it just like a normal LLaMA model.
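Since the weights are published on the Hub in the standard format, a minimal sketch of loading and running the model with `transformers` might look like this (assuming `torch` and `accelerate` are installed and you have enough GPU memory for an 8B model; the prompt text is just an illustration):

```python
# Minimal sketch: load LLaMA-Pro-8B from the Hugging Face Hub and use it
# like any other LLaMA-style causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "TencentARC/LLaMA-Pro-8B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # fp16 to reduce memory; adjust as needed
    device_map="auto",          # requires `accelerate` for automatic placement
)

# Quick generation check before handing the model to a finetuning script
# such as alpaca_lora or stanford_alpaca.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the checkpoint loads through the standard `AutoModelForCausalLM` path, existing LLaMA finetuning scripts should work by simply pointing their model-name argument at `TencentARC/LLaMA-Pro-8B`.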