[Feature request] Add video2video mode (with in-painting and outpainting analogues for making vid from keyframes and AI-continuing vids) #4
Comments
Does adding this line of code allow us to go from video to video?
This line of code only indicates that we can replace the latents with the input video encoded via the VAE, so one more step will be required.
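A minimal sketch of that extra step, assuming a diffusers-style VAE and scheduler (the actual ModelScope pipeline in `t2v_pipeline.py` may use different names, shapes and scaling constants): encode the input frames to latents, then noise them up to the timestep implied by a denoising strength, exactly as img2img does.

```python
import torch

def encode_video_to_latents(vae, frames, denoising_strength, scheduler, generator=None):
    """Encode input video frames with the VAE and partially noise them,
    mirroring how img2img initializes its latents.

    `vae`, `scheduler` and the 0.18215 scaling factor are assumptions based on
    the usual Stable-Diffusion-style setup; the real ModelScope code may differ.
    The frames here are encoded per frame and may need reshaping into the
    (batch, channels, frames, height, width) layout the video UNet expects.
    """
    # frames: (num_frames, channels, height, width), values in [-1, 1]
    with torch.no_grad():
        latents = vae.encode(frames).latent_dist.sample(generator) * 0.18215

    # Decide how many denoising steps to actually run, like img2img does.
    num_inference_steps = len(scheduler.timesteps)
    init_step = int(num_inference_steps * (1.0 - denoising_strength))
    timesteps = scheduler.timesteps[init_step:]

    # Noise the clean latents up to the first timestep we will denoise from.
    noise = torch.randn(latents.shape, generator=generator, device=latents.device)
    latents = scheduler.add_noise(latents, noise, timesteps[:1])

    return latents, timesteps
```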
PR adding denoising strength
WIP. If anyone knows how to fix it so the results don't look all washed out, that would be super helpful.
Can we get a ControlNet pose video2video? Basically: analyze the character's pose in each frame with the OpenPose model, save these ControlNet pictures, load them back with the ControlNet extension enabled (and the OpenPose preprocessor off), and render the rest of the vid2vid. The ControlNet frame processing needs to be done sequentially to limit VRAM usage (see the sketch after the next comment).
@Apatiste sounds like a good idea for the future Deforum/text2vid integration, since Deforum already has ControlNet support
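A rough sketch of the first pass of that workflow, assuming the third-party `controlnet_aux` annotator package; the directory layout and function name are illustrative, not the extension's actual API. Frames are processed one at a time rather than as a batch, which is what keeps VRAM usage bounded.

```python
import os
from PIL import Image
import torch
from controlnet_aux import OpenposeDetector  # assumed third-party annotator package

def extract_pose_frames(frame_dir, pose_dir):
    """First pass of the proposed workflow: turn every extracted video frame
    into an OpenPose control image. A second pass would feed these images to
    the ControlNet extension with the preprocessor disabled."""
    os.makedirs(pose_dir, exist_ok=True)
    detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

    for name in sorted(os.listdir(frame_dir)):
        if not name.lower().endswith((".png", ".jpg", ".jpeg")):
            continue
        frame = Image.open(os.path.join(frame_dir, name)).convert("RGB")
        pose = detector(frame)                      # pose skeleton as a PIL image
        pose.save(os.path.join(pose_dir, name))
        if torch.cuda.is_available():
            torch.cuda.empty_cache()                # keep memory low between frames
```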
Just as Stable Diffusion transforms one picture into another (or starts from noise if no input is specified), this model is theoretically capable of transforming a video into another video, guided by text prompts, if we initialize the latents with the input video frames:
https://github.com/deforum-art/sd-webui-modelscope-text2video/blob/857594d61ea776794296ffa6d256bf93eaa7fcd2/scripts/t2v_pipeline.py#L153
The proposed scheme (like img2img, but for videos)
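To complete the picture, a hedged sketch of how such VAE-seeded latents could then be denoised with the text-conditioned UNet and decoded back to frames. It consumes the output of the `encode_video_to_latents` helper sketched earlier; `unet`, `scheduler.step`, `vae.decode` and the scaling factor are diffusers-style assumptions, not the pipeline's actual attributes.

```python
import torch

def denoise_video_latents(unet, scheduler, vae, latents, timesteps,
                          prompt_embeds, uncond_embeds, guidance_scale=9.0):
    """Run the remaining denoising steps on VAE-seeded latents, then decode frames."""
    with torch.no_grad():
        for t in timesteps:
            # Classifier-free guidance: batch the unconditional and conditional passes.
            latent_in = torch.cat([latents, latents])
            text_in = torch.cat([uncond_embeds, prompt_embeds])
            noise_pred = unet(latent_in, t, encoder_hidden_states=text_in).sample
            noise_uncond, noise_text = noise_pred.chunk(2)
            noise_pred = noise_uncond + guidance_scale * (noise_text - noise_uncond)
            latents = scheduler.step(noise_pred, t, latents).prev_sample

        # Decode latents back to pixel-space frames; the 0.18215 factor and the
        # per-frame decode are assumptions and may need reshaping for video latents.
        frames = vae.decode(latents / 0.18215).sample
    return frames.clamp(-1, 1)
```

The point of the sketch is just that the only structural difference from text2video is where the initial latents come from and how many scheduler steps are run; everything after that is the existing denoising loop.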