No way to recover from an invalid prompt #1
Comments
Thanks for your feedback @juliendorra! Apologies for how long it took to get back to you; we value the community's feedback and try to respond ASAP. Let me start by saying that I agree the current implementation of NSFW filtering leaves much to be desired. You're right that we do not currently publish a list of banned words. We've added a card to our backlog regarding exposing the problematic word in the response and/or automatically filtering out the NSFW term. Once the upstream work for that has been completed, we can add it to the project. This work will need to be prioritized against other work, so please be patient while we address this issue.
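To make that backlog item concrete, here is a purely hypothetical sketch of what an error body that exposes the flagged term might look like, and how a client could use it; none of these field names exist in the current API.

```python
# Purely hypothetical: today the API only returns a generic
# "Invalid prompts detected" message with no offending word.
error_body = {
    "name": "invalid_prompts",
    "message": "Invalid prompts detected",
    "flagged_words": ["kids"],  # hypothetical field exposing the flagged term(s)
}

# With a field like that, a client could strip the flagged words and retry.
prompt = "a mum fixing toys for her kids, warm evening light"
for word in error_body.get("flagged_words", []):
    prompt = prompt.replace(word, "")
print(prompt)  # "a mum fixing toys for her , warm evening light" -- crude, but unblocked
```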
Hi, here's an example; this is from a series of prompts that tell a whole story about a mum fixing toys for her kids…
This doesn't work in the API or in Studio: "Invalid prompts detected". 4 out of 5 images for this story had the issue. Even as a human, I have a hard time understanding what to remove… Any news on this? It's really blocking, as it can randomly block totally innocuous ideas like this and introduce random, uncontrollable and unfixable errors in the API, and thus in our apps 🙁 [edit: after split-testing, the only word that blocks the prompt is… kids. Remove just this word and it works. Doesn't make a lot of sense… but at least it would be useful to get the word back. Yes, I know that would expose the list to brute-force discovery, but…]
I am having this same problem. These prompts are generated programmatically and I am using the Dreamstudio API. Not sure what about these prompts triggers the invalid_prompts error:
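For context, a minimal sketch of the kind of programmatic call that hits this, assuming the v1 REST text-to-image endpoint; the engine id, headers, and example prompt below are placeholders and may not match the actual setup.

```python
import requests

API_KEY = "sk-..."  # placeholder
ENGINE_ID = "stable-diffusion-xl-1024-v1-0"  # assumed engine id; substitute your own

resp = requests.post(
    f"https://api.stability.ai/v1/generation/{ENGINE_ID}/text-to-image",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
        "Accept": "application/json",
    },
    # A perfectly innocuous, programmatically generated prompt...
    json={"text_prompts": [{"text": "a teacher reading a story to kids in a classroom"}]},
)

if resp.status_code == 400:
    # ...can still come back as a bare 400, e.g.
    # {"name": "invalid_prompts", "message": "Invalid prompts detected"},
    # with no indication of which word tripped the filter.
    print(resp.json())
```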
It happened again within the same day, with different users of my service. It seems that the "Invalid prompts detected" exception is raised based on a simple keyword check of the prompt. I'm assuming that the prompts referring to children generated the error.
Invalid prompts detected:
"teacher" is a prohibited word! I have to be candid: this is incredibly frustrating.
This is an invalid prompt. LOL. Just maddening.
Hello, I'm also running into this same blocking with my users. Any news on progress toward fixing these blocks?
Same issue: totally kid-friendly prompts are getting this error.
Yes, same for me. Very innocuous prompts are returning error 400 from the API, and it's happening frequently. If it's not addressed ASAP I'll need to switch to another provider for my diffusion. Honestly, an AI-based company can't implement a more sophisticated filtering model? Really?
Yeah, the invalid prompt thing is really irritating. It's not clearly communicated which words are not allowed. I also use ChatGPT to generate prompts, and it irregularly throws errors. I'm also thinking about moving to another AI image provider if this isn't fixed. After all, there are plenty of options out there at this point.
Basically anything that says "kid" is banned. Even "kids wearing clothing". I don't understand how the filter blocks out a prompt like "a mother with 3 kids". I think the filter needs some positive examples with kids, not just negative ones. I'd be happy to help if it were open source, because I really like the API!
Quite annoying. A lot of the images based on story summaries can't be generated.
I decided to build an offering in this space instead of trying to wrestle with this issue. Check it out if you so wish: Diffute. The service currently supports inferencing and training of Stable Diffusion models, including Stable Diffusion XL 1.0. Feel free to ping me if you need capabilities that are not present today; I will happily add them.
Having the same problem from time to time. For example, this prompt got the "invalid prompt" error:
Do you know what could be wrong with this prompt? I guess it's because of the word "child".
Would you be able to publish the banned words so we can ask our prompt generator to avoid them?
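If such a list existed, pre-filtering on the client side would be trivial; a rough sketch, where BANNED_WORDS stands in for the hypothetical published list (the example words are just ones reported as blocked in this thread).

```python
import re

# Hypothetical published list; no such list exists today.
BANNED_WORDS = {"kid", "kids", "child", "boy", "teacher"}

def sanitize(prompt: str) -> str:
    """Drop any whole word that appears in the hypothetical banned list."""
    tokens = re.findall(r"\w+|\W+", prompt)
    return "".join(t for t in tokens if t.lower() not in BANNED_WORDS)

print(sanitize("a teacher reading a story to three kids"))
# -> "a  reading a story to three " (crude, but it would get past the filter)
```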
+1
Same here. Looking for alternative solutions because of this.
"Boy" is banned too:
My descriptions are generated by ChatGPT, which I instruct to be very SFW. This is going to be a major problem for me too.
You should just have a nude filter like this on the generated image instead of blocking the prompt.
Hi! There's an issue with Invalid Prompts in the API.
The way it is handled means the API is likely to error out in an app, with no automated way to recover from the error.
I'm sending prompts through the API that are not manually written but are a combination of sources. I cannot fully control the sources, as they depend on end-user settings. My users don't manually write the prompts either.
Certain words in the combined prompt (even sometimes quite tame words, or polysemic ones that have a common safe meaning, but that's another, much wider issue) are triggering a 400 response "Invalid prompts detected"
This is the same behavior as the popup in the UI.
For a manual UI, this is OK behavior: the human can try to guess the word. But for a machine-to-machine API, there are several issues:
(4. This is in addition to the previously reported issue that banned words in a negative prompt also trigger an "Invalid prompts detected" error, which of course makes no sense.)
My own preference as a developer would be for 2 and 3 to be available.
I know that if there were an 'auto filter' switch (3.), I would turn it on now and not think about it anymore! Then maybe later I would try to use 2. to automatically rewrite invalid prompts (tamer synonyms, or maybe an ML solution).
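To make 2. concrete, a rough sketch of the kind of client-side rewriting and retrying I have in mind; the synonym map and retry policy are pure guesses, since the API never names the rejected word.

```python
import requests

# Guesswork: because the error never names the offending term, the map of
# "risky" words and tamer substitutes has to be maintained by hand.
TAMER_SYNONYMS = {"kids": "young people", "child": "small person", "boy": "young man"}

def rewrite(prompt: str) -> str:
    # Naive substring replacement; good enough to illustrate the idea.
    for word, tamer in TAMER_SYNONYMS.items():
        prompt = prompt.replace(word, tamer)
    return prompt

def generate_with_retry(session: requests.Session, url: str, prompt: str, max_tries: int = 2):
    # Assumes `session` already carries the auth headers for the API.
    resp = None
    for _ in range(max_tries):
        resp = session.post(url, json={"text_prompts": [{"text": prompt}]})
        if not (resp.status_code == 400 and "Invalid prompts" in resp.text):
            return resp
        prompt = rewrite(prompt)  # blind rewrite, then try again
    return resp
```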
I would love feedback from the team and other users of the API on this.