The data conversion is very slow, about 6 minutes per frame. Why is it so slow?

Yes, inference with LLaVA-1.6-34B is relatively slow, and each sample requires generating multiple QA responses. If you need to speed up data production, you can try 4-bit or 8-bit inference here. If you have sufficient GPU resources, you can also split the dataset and generate the data in parallel.
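To illustrate the parallel-generation suggestion, here is a minimal sketch of splitting the sample list into per-GPU shards. The file names, shard count, and the `generate.py` entry point are hypothetical placeholders, not part of this repo:

```python
from typing import List


def split_into_shards(samples: List[str], num_shards: int) -> List[List[str]]:
    """Split the sample list into num_shards roughly equal chunks,
    so each chunk can be processed by a separate GPU/worker."""
    if num_shards <= 0:
        raise ValueError("num_shards must be positive")
    # Round-robin assignment keeps shard sizes balanced even when
    # len(samples) is not divisible by num_shards.
    return [samples[i::num_shards] for i in range(num_shards)]


# Example: 10 frames split across 4 workers. Each shard could then be
# handed to its own process, e.g.
#   CUDA_VISIBLE_DEVICES=<i> python generate.py --shard <i>
shards = split_into_shards([f"frame_{i:04d}.jpg" for i in range(10)], 4)
```

Combined with 4-bit or 8-bit model loading, this turns a single slow sequential pass into N independent passes whose outputs can simply be concatenated afterwards.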