Will the training code be published? #10
Comments
The training code is very specialized to our infrastructure, so we do not plan to release a complete training script for now. However, every module is available in the diffusers pipeline, so it is straightforward to write your own training script, though you may need to do some heavy engineering on the IO/CPU and GPU/TPU side to finish training in a reasonable time. We do plan to release the ControlNet training part in the near future. A larger issue is how you deal with the camera pose: Zero123++ is all about reusing as much of the SD prior as possible for 3D, and SD itself has only a very weak prior for poses.
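For reference, here is a minimal sketch of what "writing your own training script" on top of the diffusers pipeline could look like. It assumes the standard diffusers component layout (`vae`, `unet`, `scheduler`) and the model/pipeline ids from the Zero123++ README; it deliberately omits Zero123++'s reference-image attention conditioning and its exact prediction objective, so treat it as a starting skeleton rather than the official recipe.

```python
# Hypothetical fine-tuning skeleton on top of the Zero123++ diffusers pipeline.
# The conditioning embeddings and loss target are placeholders; the real
# training setup (reference attention, noise schedule, objective) is not public.
import torch
import torch.nn.functional as F
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "sudo-ai/zero123plus-v1.2",                      # model id from the README
    custom_pipeline="sudo-ai/zero123plus-pipeline",  # custom pipeline from the README
    torch_dtype=torch.float32,
)
unet, vae, scheduler = pipeline.unet, pipeline.vae, pipeline.scheduler
vae.requires_grad_(False)
unet.train()

optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)

def training_step(target_images, cond_embeds):
    """target_images: (B, 3, H, W) tensors in [-1, 1].
    cond_embeds: cross-attention conditioning; in real Zero123++ training this
    would come from the reference image, which is omitted here."""
    with torch.no_grad():
        latents = vae.encode(target_images).latent_dist.sample()
        latents = latents * vae.config.scaling_factor

    noise = torch.randn_like(latents)
    timesteps = torch.randint(
        0, scheduler.config.num_train_timesteps,
        (latents.shape[0],), device=latents.device,
    )
    noisy_latents = scheduler.add_noise(latents, noise, timesteps)

    pred = unet(noisy_latents, timesteps, encoder_hidden_states=cond_embeds).sample
    # Plain epsilon-prediction MSE as a placeholder; the actual Zero123++
    # objective (e.g. v-prediction with its modified noise schedule) differs.
    loss = F.mse_loss(pred, noise)

    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

The main engineering work beyond this skeleton is the data pipeline (rendering and batching multi-view targets fast enough to keep the accelerators busy) and wiring in the reference-image conditioning, which is where most of the infrastructure-specific code lives.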
Though not official, you can try the fine-tuning code from https://github.com/TencentARC/InstantMesh
Thank you for your remarkable work. I am interested in training my own ControlNet, and I'd like to use conditions beyond just depth. Could you please let me know when you might release the ControlNet training part?
I would like to train a model on my own custom camera poses. I hope the training code will be made available.