It looks like there are minor differences between the file in the repository and the one on Google Colab.
The notebook on Google Colab seems to fail despite installing and importing bitsandbytes and accelerate.
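For what it's worth, on Colab the install usually needs a runtime restart before the imports pick up the freshly installed packages; a minimal sketch of the install cell (the exact package list is my guess at what the notebook needs, not taken from it):

```python
# Colab install cell; restart the runtime after this completes,
# then re-run the import cells so the new packages are picked up.
!pip install -q bitsandbytes accelerate transformers peft
```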
I tried to run the notebook locally, but ran out of memory; I originally assumed the pretrained model called in the example does not support multi-GPU. (For this reason I switched to Llama, which I downloaded locally by hand.)
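For reference, this is roughly how I'd expect the model load to look with the memory-saving options enabled; a sketch only, assuming a local checkpoint path like `./llama-7b` (the path and quantization settings are mine, not from the notebook):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Hypothetical local checkpoint path; substitute wherever you downloaded the weights.
model_path = "./llama-7b"

# 4-bit quantization via bitsandbytes to cut GPU memory usage.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    quantization_config=bnb_config,
    device_map="auto",  # shard across available GPUs / offload to CPU as needed
)
```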
I'm now running into issues at the training stage: I'm not sure why the labels include None values. Maybe this is caused by a change I made, but I don't have a working baseline to compare against (since the Google Colab file is not working).
Can you check whether any changes were made to the data loading or preprocessing steps that might have introduced None values in the labels, and also whether any modification you made to the code could be affecting the labels during the training stage?
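A quick way to localize where the None values enter is to inspect the dataset right after tokenization, before the trainer ever sees it. A minimal diagnostic sketch, assuming the tokenized dataset yields dicts with a `labels` field (the variable name `tokenized_dataset` is a placeholder for whatever the notebook's preprocessing step produces):

```python
def find_none_labels(dataset, label_key="labels"):
    """Return indices of examples whose labels are missing or contain None."""
    bad = []
    for i, example in enumerate(dataset):
        labels = example.get(label_key)
        if labels is None or any(tok is None for tok in labels):
            bad.append(i)
    return bad

# `tokenized_dataset` is hypothetical; use the output of the preprocessing cell.
bad_indices = find_none_labels(tokenized_dataset)
print(f"{len(bad_indices)} examples with None labels, e.g. {bad_indices[:5]}")
```

If this reports zero bad examples, the None values are probably introduced later, e.g. by the data collator, rather than in preprocessing.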
I'm having some issues running the following notebook locally:
https://github.com/AI4Finance-Foundation/FinGPT/blob/master/FinGPT_Training_LoRA_with_ChatGLM2_6B_for_Beginners.ipynb
Attached is my copy of the notebook. I'm wondering if someone can provide some helpful pointers; I've been staring at this for a while.
FinGPT_Training_LoRA_with_ChatGLM2_6B_for_Beginners.zip