Hi,
I noticed that for generating the animation there is a limit of 196 frames. Is this just so that it provides a quick result? If it is a limitation of the current model, would it be possible to train a model to handle more frames? I had a quick look but couldn't find a limit anywhere in the training code.
Also, am I correct in understanding that the dim_pose variable is the number of unique poses in the dataset?
Thanks
dim_pose is not the number of unique poses; it is the dimension of each frame, which includes the 3D coordinates, velocity, and rotation vector of each joint. The 196-frame length limit is hard-coded here.
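For reference, with HumanML3D dim_pose works out to 263. Below is a rough sketch of the usual feature breakdown; it follows the HumanML3D representation, so treat the exact layout as an assumption rather than a guarantee about this repo's internals:

```python
# Sketch of the per-frame feature dimension (dim_pose) for HumanML3D,
# which uses 22 SMPL joints. The breakdown below follows the HumanML3D
# representation; the variable names are illustrative, not this repo's.
num_joints = 22

root_rot_velocity    = 1                     # root angular velocity (y-axis)
root_linear_velocity = 2                     # root velocity on the xz plane
root_height          = 1                     # root joint height
local_positions      = (num_joints - 1) * 3  # 3D coords of non-root joints
local_rotations      = (num_joints - 1) * 6  # 6D rotation per non-root joint
local_velocities     = num_joints * 3        # per-joint linear velocity
foot_contacts        = 4                     # binary foot-contact labels

dim_pose = (root_rot_velocity + root_linear_velocity + root_height
            + local_positions + local_rotations + local_velocities
            + foot_contacts)
print(dim_pose)  # 263
```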
Currently, most sequences in HumanML3D are shorter than 196 frames. If you want to train the model on your own dataset, you can change this limit in both tool/train.py and datasets/dataset.py, along the lines of the sketch below.
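A minimal sketch of the kind of change involved, assuming a max_motion_length-style constant; the actual variable names in tool/train.py and datasets/dataset.py may differ:

```python
# Hedged sketch, not the repo's actual code. The cap appears in both the
# training config and the dataset loader, so both need the same new value.
MAX_MOTION_LENGTH = 196  # raise this, e.g. to 300, in tool/train.py

# In datasets/dataset.py, sequences longer than the cap are typically
# truncated (or dropped) at load time, e.g.:
def clip_motion(motion, max_motion_length=MAX_MOTION_LENGTH):
    """Truncate a (num_frames, dim_pose) array to the length cap."""
    if len(motion) > max_motion_length:
        motion = motion[:max_motion_length]
    return motion
```

Note that the positional encodings and any padding masks must also cover the new maximum length, so retraining is needed after raising the cap.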
If you just want to generate a longer sequence with the HumanML3D dataset, some tricks from video diffusion models can support this. I will update the repo in a few days.