First, to troubleshoot your process:
Did you point the ControlNet directly at image (2), or did you extract a Canny or soft edge layer from image (2) and point the ControlNet at that layer? The latter is what needs to happen, since image (2) on its own does not resemble a Canny or soft edge bitmap.

Now that that's out of the way: was the problem that step 3 did not retain the color scheme from the result of step 2? Color is sometimes tricky to control at 100% generation strength. You can try adding color information (blue sky, white sail, sunny, etc.) to your prompt, but your mileage may vary, and there isn't a true "color" or "hue" ControlNet out there. A couple of possible solutions I'd suggest:
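To make the extraction step above concrete, here is a minimal sketch of what a Canny preprocessor produces, assuming OpenCV and Pillow are available; the filename and thresholds are placeholders, not settings from this thread:

```python
# Minimal sketch: extract a Canny edge layer from the cleaned-up base image.
# Assumes OpenCV and Pillow; filename and thresholds are placeholders.
import cv2
import numpy as np
from PIL import Image

img = cv2.imread("image2.png")                  # your image (2)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # Canny wants an 8-bit single channel
edges = cv2.Canny(gray, 100, 200)               # white edges on a black background
edges_rgb = np.stack([edges] * 3, axis=-1)      # ControlNet expects 3 channels
Image.fromarray(edges_rgb).save("image2_canny.png")
# Point the Canny ControlNet at image2_canny.png, not at image (2) itself.
```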
LMK if this helps.
---
Heya all, I'm trying out Krita AI Diffusion to refine storyboards into a color script. My current process is to clean up the storyboard frame so there is no hatching, since I found that works better with the Scribble ControlNet for the first iteration of the process.
So the setup is a Scribble ControlNet pointing at the image (1) layer with midway weight.
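Outside Krita, phase 1 would look roughly like this in diffusers — a minimal sketch, assuming an SD 1.5 checkpoint and the public Scribble ControlNet; the model IDs, prompt, file names, and the 0.5 "midway weight" are illustrative placeholders:

```python
# Minimal sketch of phase 1 with diffusers (assumed SD 1.5 models; prompt,
# file names, and the 0.5 "midway weight" are placeholders).
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

scribble = load_image("image1.png")  # the cleaned storyboard frame, image (1)
result = pipe(
    "wide shot of a harbor at dawn, concept art",  # placeholder prompt
    image=scribble,
    controlnet_conditioning_scale=0.5,  # "midway weight"
    num_inference_steps=25,
).images[0]
result.save("phase1_base.png")
```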
Next I usually need to paint over any issues and hallucinations and do general image editing to get a nice base for the final image. You can see the result in image (2).
Now I'm stuck on the last phase, which sometimes works but sometimes produces results like the ones in the images below. The setup is the same prompt as for phase 1, but with Canny Edge or Soft Edge on image (2).
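For reference, a minimal diffusers sketch of that last phase, assuming the edge layer has already been extracted from image (2) (see the Canny snippet in the reply above); the model IDs, prompt, and file names are placeholders:

```python
# Minimal sketch of phase 3: same prompt as phase 1, Canny ControlNet pointed
# at the edge layer extracted from image (2). Model IDs are assumptions.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

edge_layer = load_image("image2_canny.png")  # extracted from image (2), not image (2) itself
result = pipe(
    "wide shot of a harbor at dawn, concept art",  # same placeholder prompt as phase 1
    image=edge_layer,
    controlnet_conditioning_scale=1.0,
    num_inference_steps=25,
).images[0]
result.save("phase3.png")
```

If the result drops the colors established in image (2), one common variant (not what the post describes) is StableDiffusionControlNetImg2ImgPipeline, which starts from image (2) itself at moderate strength while the edge map holds the structure.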
If anyone has any suggestions, please let me know.
1. (cleaned storyboard frame)
2. (painted-over base image)
3. (phase 3 result)