Replies: 2 comments
-
I agree. I feed the sketch to ChatGPT to get the prompt and then amend it as needed. Your proposal would skip that step.
-
Changing the prompt when detailing is pretty essential too. It could also generate regional text from a selection or layer. That's what I'd like to have, or something that names region prompts after the user splits the image into regional layers, attaching each layer to the right generated prompt; I think it would fall within the same feature. Doing this manually takes a bit of time and could use automation via interrogation. Letting one use the layer name string as a regional prompt was an idea I was going to hack in for my own use (see the sketch below), but a full feature with interrogation would be way better.
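As a rough illustration of that layer-name hack, here is a minimal sketch built on the real Krita Python API (`Krita.instance()`, `activeDocument()`, `topLevelNodes()`, `name()`, `bounds()`); the regional-prompt dictionary format is just an assumption for illustration, not an existing plugin structure:

```python
# Hypothetical sketch: collect regional prompts from layer names in Krita.
# Assumes the user has split the image into one paint layer per region and
# named each layer with its desired prompt text. The Krita API calls used
# here are real; the output structure is made up for this example.
from krita import Krita

def regional_prompts_from_layers():
    doc = Krita.instance().activeDocument()
    if doc is None:
        return []
    regions = []
    for node in doc.topLevelNodes():
        # Only visible paint layers count as prompt regions.
        if node.type() != "paintlayer" or not node.visible():
            continue
        rect = node.bounds()  # bounding box of the painted content
        regions.append({
            "prompt": node.name(),  # layer name doubles as the region prompt
            "region": (rect.x(), rect.y(), rect.width(), rect.height()),
        })
    return regions
```

With interrogation added, the same loop could instead send each layer's pixels to a tagger and write the result back into the layer name.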
-
As the text prompt can be thought of as the primary "control" for Stable Diffusion, features like CLIP interrogation, the wd14 tagger, etc. could be integrated by adding a "star" button next to the prompt textbox, consistent with the ControlNet UI.
Providing a way to hook these vision models into Krita for prompt generation could be very helpful.
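For illustration, a minimal sketch of what the plugin-side call could look like, assuming an AUTOMATIC1111-style backend whose `/sdapi/v1/interrogate` endpoint takes a base64 image and a model name; the server URL and file path below are placeholders:

```python
# Hedged sketch: ask an A1111-style server to interrogate an image and
# return a caption usable as a prompt. The endpoint and payload shape
# exist in the AUTOMATIC1111 web UI API; everything else is assumed.
import base64
import json
import urllib.request

def interrogate(png_path, url="http://127.0.0.1:7860", model="clip"):
    with open(png_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    payload = json.dumps({"image": image_b64, "model": model}).encode("utf-8")
    req = urllib.request.Request(
        url + "/sdapi/v1/interrogate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["caption"]  # generated prompt text

# e.g. print(interrogate("sketch.png"))  # result could prefill the textbox
```

The returned caption would prefill the prompt textbox, leaving the user to amend it as needed.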
Thoughts?