Replies: 1 comment
-
You can try that already with img2img: just feed it one of the training images, and the tags that came with it as the prompt. It does remember a lot of that. For instance, "Goku" sucks most of the time, but "songoku" works better, because that's the tag that appeared in many of the training images.
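A minimal sketch of that workflow, assuming the webui is running locally with the `--api` flag. The endpoint is the webui's img2img API; the image path, tags, and sampler settings below are placeholders, not anything prescribed by the repo:

```python
import base64
import requests

WEBUI_URL = "http://127.0.0.1:7860"  # assumes webui was started with --api

# One of the original training images plus the caption/tags it was trained with.
with open("training_images/songoku_0042.png", "rb") as f:  # hypothetical path
    init_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "init_images": [init_image],
    "prompt": "songoku, orange gi, spiky hair",  # tags from the training caption
    "denoising_strength": 0.4,  # low strength keeps the output close to the source
    "steps": 30,
    "cfg_scale": 7,
    "sampler_name": "Euler a",
}

r = requests.post(f"{WEBUI_URL}/sdapi/v1/img2img", json=payload)
r.raise_for_status()

# The API returns base64-encoded images; save the first one.
with open("img2img_result.png", "wb") as out:
    out.write(base64.b64decode(r.json()["images"][0]))
```

With a low denoising strength the result stays near the training image, which is a quick way to check how much of the original the model actually retained.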
-
Would something like that be theoretically possible?
Assume we have a model trained on a set of pictures with tags assigned to them. Due to the way SD works, it can't just "memorize" the training set, and it won't reproduce the same image from those tags alone. But what if we could feed one of the source images back to it, along with the tags used during training, to help it "remember" better? SD would analyze the image like it already does with CLIP interrogation, and produce an additional or adjusted prompt (perhaps along with best-suited sampler/CFG/steps settings) that yields a picture as close as possible to the original, or at least similar in style and overall composition.
Why is this needed? To get a good starting point for building your prompts. Currently your best bet is to use the minimal common set of tags that invoke the character you trained the model on, and pray. But if there's a particularly good image in the dataset that you'd like to be able to reproduce, this kind of prompt generation would give you a solid starting point.
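A rough sketch of how the prompt-recovery step could look with what exists today, using the webui's CLIP interrogation endpoint and merging its caption with the known training tags. The file path and tag list are placeholders, and the merge heuristic is only an illustration of the idea, not a proposed implementation:

```python
import base64
import requests

WEBUI_URL = "http://127.0.0.1:7860"  # assumes webui was started with --api

def interrogate(image_path: str) -> str:
    """Ask the webui's CLIP interrogator to describe an image."""
    with open(image_path, "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode("utf-8")
    r = requests.post(
        f"{WEBUI_URL}/sdapi/v1/interrogate",
        json={"image": img_b64, "model": "clip"},
    )
    r.raise_for_status()
    return r.json()["caption"]

# Tags attached to this image during training (placeholder values).
training_tags = ["songoku", "orange gi", "spiky hair", "full body"]

caption = interrogate("training_images/songoku_0042.png")  # hypothetical path

# Naive merge: keep the CLIP caption and append any training tags it missed.
missing = [t for t in training_tags if t.lower() not in caption.lower()]
starting_prompt = ", ".join([caption] + missing)

print(starting_prompt)  # use as a starting point for txt2img experiments
```

The missing piece relative to the idea above is the settings search (sampler/CFG/steps), which would need some way to score generated images against the source image and iterate.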