After reading both the short and the long versions of the FinGPT paper, I am a bit confused about how FinGPT v1/v2/v3 were actually built. Were they created by taking a pre-trained model (e.g., Llama, ChatGLM) and fine-tuning it on
- a Language Modeling task
- a Sentiment Analysis task

As a follow-up question: was the dataset used for training obtained via the FinNLP real-time data API?
It would be great if you could provide more details on this. I appreciate it.
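For concreteness, here is a minimal sketch of what I assume the pipeline looks like: a pre-trained base model with LoRA adapters fine-tuned on instruction-style sentiment data. The model name, dataset fields, and hyperparameters below are just placeholders on my part, not anything taken from the paper:

```python
# Sketch of my assumed workflow: parameter-efficient (LoRA) fine-tuning of a
# pre-trained causal LM on instruction-style sentiment-analysis pairs.
# Base model, dataset file, and field names are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, Trainer
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

base = "meta-llama/Llama-2-7b-hf"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the base model with LoRA adapters so only a small set of weights is trained.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                  lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Hypothetical instruction-style sentiment dataset with instruction/input/output fields.
ds = load_dataset("json", data_files="fin_sentiment.json")["train"]

def to_features(example):
    # Serialize each example into a single prompt and train with causal-LM labels.
    prompt = f"{example['instruction']}\nInput: {example['input']}\nAnswer: {example['output']}"
    tokens = tokenizer(prompt, truncation=True, max_length=512, padding="max_length")
    tokens["labels"] = tokens["input_ids"].copy()
    return tokens

ds = ds.map(to_features, remove_columns=ds.column_names)

args = TrainingArguments(output_dir="fingpt-lora", per_device_train_batch_size=4,
                         num_train_epochs=1, learning_rate=2e-4, fp16=True)
Trainer(model=model, args=args, train_dataset=ds).train()
```

Is this roughly what was done for the v1/v2/v3 models, or did the training differ (e.g., full fine-tuning, a different objective, or a different data source)?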