No way to recover from an invalid prompt #1

Open
juliendorra opened this issue Jan 6, 2023 · 19 comments
Labels
enhancement New feature or request

Comments

@juliendorra

Hi! There's an issue with Invalid Prompts in the API.

The way it is handled means the API can fail an app's request with no automated way to recover from the error.

I'm sending prompts through the API that are not manually written but are combinations of sources. I cannot fully control the sources, as they depend on end-user settings. My users don't manually write the prompts either.

Certain words in the combined prompt (sometimes quite tame words, or polysemic ones that have a common safe meaning, but that's another, much wider issue) trigger a 400 response, "Invalid prompts detected".

This is the same behavior as the popup in the UI.

For a manual UI, it's OK behavior; the human can try to guess the word. But for a machine-to-machine API, there are several issues:

  1. As far as I know, we don't have a list of these words to filter them out beforehand.
  2. The API does not respond with the problematic word either, so the app at the other end cannot act on the 400 (for example, by removing the word and sending another request).
  3. The API does not offer an option to automatically filter out any supposedly NSFW word.

(4. This is in addition to the previous issue that banned words in a negative prompt also trigger an "Invalid prompts detected" error, which of course makes no sense.)

My own preference as a developer would be for 2 and 3 to be available.

I know that if there were an 'auto filter' switch (3.), I would turn it on now and not think about it anymore! Then maybe later I would use 2. to automatically rewrite invalid prompts (tamer synonyms, or maybe an ML solution), along the lines of the sketch below.
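
To make 2. concrete, here is a minimal sketch of the retry loop it would enable. Everything API-specific is an assumption: the endpoint URL, the request shape, and especially the `flagged_word` field in the 400 body are hypothetical, since today's error does not report the word.

```python
import requests

API_URL = "https://api.example.com/v1/generation"  # hypothetical endpoint

def generate_with_retry(prompt: str, max_retries: int = 5) -> bytes:
    """Retry generation, stripping the flagged word on each 400."""
    for _ in range(max_retries):
        resp = requests.post(API_URL, json={"text_prompts": [{"text": prompt}]})
        if resp.ok:
            return resp.content
        if resp.status_code != 400:
            resp.raise_for_status()  # unrelated failure; surface it
        word = resp.json().get("flagged_word")  # hypothetical field (point 2)
        if not word:
            raise RuntimeError(f"Invalid prompt, no word reported: {resp.text}")
        # Drop the flagged word and retry with the tamer prompt.
        prompt = " ".join(w for w in prompt.split() if w.strip(".,!?") != word)
    raise RuntimeError("Prompt still invalid after retries")
```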

I would love feedback from the team and other users of the API on this.

@johnsabath added the enhancement (New feature or request) label Jan 8, 2023
@todd-elvers

Thanks for your feedback @juliendorra! Apologies for how long it took to get back to you; we value the community's feedback and try to respond ASAP.

Let me start by saying that I agree the current implementation of NSFW filtering leaves much to be desired.

You're right that we do not currently publish a list of banned words. We've added a card to our backlog for exposing the problematic word in the response and/or automatically filtering out the NSFW term. Once the upstream work for that is complete, we can add it to the project.

This work will need to be prioritized against other work, so please be patient while we address this issue.

@juliendorra
Author

juliendorra commented Feb 3, 2023

Hi, here's an example. This is from a series of prompts that tell a whole story about a mum fixing toys for her kids…

The woman standing up, holding the robot toy in her hand. She is surrounded by two kids, a boy and a girl, both with big smiles on their faces. The kitchen table is now tidy, with the soldering iron and the circuit board off to the side. The woman is slim and has short, light-brown hair. She is wearing a white t-shirt, blue jeans, and glasses. The boy is wearing a blue t-shirt and blue shorts. The girl is wearing a yellow dress.

This doesn't work in either the API or the studio: Invalid prompts detected

4 out of 5 images for this story had the issue. Even as a human, I have a hard time understanding what to remove…

Any news on this? It's really blocking: it can randomly reject totally innocuous ideas like this one and introduce random, uncontrollable, unfixable errors in the API, and thus in our apps 🙁

[edit: after split-testing (a sketch of the approach is below), the only word that blocks the prompt is… kids. Remove just this word and it works. It doesn't make a lot of sense… but at least it would be useful to get the word back. Yes, I know that would expose the list to brute-force discovery, but…]
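
For anyone who wants to reproduce this, a minimal sketch of that split-testing as a leave-one-out pass over the prompt. The `is_valid_prompt` callback is a hypothetical wrapper you would write around the API that returns False on the 400; note it costs one API call per word and only isolates single-word blockers (if two words each block independently, removing just one is never enough).

```python
def find_blocking_words(prompt: str, is_valid_prompt) -> list[str]:
    """Leave-one-out split test: drop each word in turn and re-check.

    `is_valid_prompt(prompt) -> bool` is a caller-supplied function that
    sends the candidate prompt to the API and returns False when it gets
    the 400 "Invalid prompts detected" response.
    """
    words = prompt.split()
    blockers = []
    for i, word in enumerate(words):
        candidate = " ".join(words[:i] + words[i + 1:])
        if is_valid_prompt(candidate):
            # Prompt passes once this word is removed, so it was the blocker.
            blockers.append(word)
    return blockers
```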

@rajbala

rajbala commented May 9, 2023

I am having this same problem. These prompts are generated programmatically, and I am using the Dreamstudio API.

Not sure what about these prompts triggers the invalid_prompts error:

Clouds, umbrella, and shield representing protection against failure
Broken cloud symbolizing cloud provider failure
Risk assessment matrix or scale to showcase different levels of risk
A solid foundation or base, possibly made of stone, supporting a structure
Interconnected cloud symbols, representing different providers working together
A safety vault or secure storage box, symbolizing secure data backup
A radar screen or monitoring dashboard displaying various metrics and alerts
A checklist or progress bar showing completion of tasks or updates
A group of people participating in a training session or workshop
A lighthouse or beacon, symbolizing guidance and protection against potential threats

@rajbala

rajbala commented May 10, 2023

It happened again within the same day, for different users of my service.

It seems the "Invalid prompts detected" exception is raised from simple keyword matching on the prompt. I'm assuming the prompts that refer to children triggered the error.

Colorful books stacked or arranged in a whimsical manner
A book with magical sparkles coming from its pages
A bookshelf filled with a variety of children's books
Illustrations of various diverse characters from children's books
A winding road or path representing a captivating plot
A beautiful, detailed illustration from a children's book
A child touching a book with interactive elements, such as pop-up features or textures
A calendar with designated reading times marked
An animated storyteller reading a book to an engaged group of children
A group of children excitedly gathered around a storyteller or a stack of books
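
Until the list is published, the only workaround I see is a client-side pre-filter over suspected words. A minimal sketch; the denylist is pure guesswork built from words reported to trigger the error, since the real list is unpublished:

```python
import re

# Guessed denylist: only words observed to trigger the 400 so far.
SUSPECT_WORDS = ["kids", "kid", "children", "child"]
PATTERN = re.compile(
    r"\b(" + "|".join(SUSPECT_WORDS) + r")('s)?\b", re.IGNORECASE
)

def prefilter(prompt: str) -> str:
    """Drop suspected trigger words (and possessives) before calling the API."""
    cleaned = PATTERN.sub("", prompt)
    return re.sub(r"\s{2,}", " ", cleaned).strip()  # collapse doubled spaces

print(prefilter("A bookshelf filled with a variety of children's books"))
# -> "A bookshelf filled with a variety of books"
```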

@rajbala

rajbala commented May 13, 2023

Invalid prompts detected:

evaluation, assessment, feedback, rating, teacher, management

"teacher" is a prohibited word!

I have to be candid: this is incredibly frustrating.

Removing "teacher" makes it pass:

evaluation, assessment, feedback, rating, management

@rajbala

rajbala commented May 13, 2023

This is an invalid prompt. LOL. Just maddening.

An airplane ascending into the sky, symbolizing the successful execution of the 30-60-90 day sales plan

@Arasiia

Arasiia commented Jun 9, 2023

Hello, my users and I are also running into this same blocking. Any news on progress toward fixing these blocks?

@rahul-littlegreats

Same issue: totally kid-friendly prompts are getting this error.

@blistick

Yes, same for me. Very innocuous prompts are returning error 400 from the API, and it's happening frequently.

If it's not addressed ASAP I'll need to switch to another provider for my diffusion. Honestly, an AI-based company can't implement a more sophisticated filtering model? Really?

@andreasjhagen

Yeah, the invalid prompt thing is really irritating. It's not clearly communicated which words are not allowed.

I also use ChatGPT to generate prompts, and the API irregularly throws errors on them. I'm also thinking about moving to another AI image provider at this point if this isn't fixed.

After all, there are plenty of options out there at this point.

@turbobuilt

Basically anything that says "kid" is banned, even "kids wearing clothing". I don't understand how the filter blocks a prompt like "a mother with 3 kids". I think the filter needs some positive examples with kids, not just negative ones. I'd be happy to help if it were open source, because I really like the API!

@simaofreitas

Quite annoying. A lot of images based on story summaries cannot be generated.
Any progress here? How can we avoid this?

@rajbala

rajbala commented Jul 31, 2023

I decided to build an offering in this space instead of trying to wrestle with this issue. Check it out if you so wish: Diffute

The service currently supports inference and training of Stable Diffusion models, including Stable Diffusion XL 1.0.

Feel free to ping me if you need capabilities that are not present today. I will happily add them.

@csarigoz

csarigoz commented Aug 2, 2023

Having the same problem from time to time. For example, this prompt got the "invalid prompt" error:

Taylor, Andrea, USTA National Tennis Center - A stunning, soft-colored artwork of Taylor, a brave child, playing an exhilarating tennis match against formidable opponents.

Do you know what could be wrong with this prompt? I guess it's because of the word "child".
And do you know if there's a list of keywords that should be avoided in prompts?

@DarrenChenOL

Would you be able to publish the banned words so we can ask our prompt generator to avoid them?

@sharma0611

+1

@edgardz

edgardz commented Oct 10, 2023

Same here. Looking for alternative solutions because of this.

@RobinDenaux

"Boy" is banned too:

A young boy and a friendly giant mech exploring a magical forest together.

My descriptions are generated by ChatGPT, which I instruct to be very SFW. This is going to be a major problem for me too.

@turbobuilt

You should just run an NSFW filter on the output images, like this one, instead of filtering the prompt:

https://github.com/nipunru/nsfw-detector-android
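
That project is an Android library, but the same idea works server-side. A minimal sketch assuming the open-source `opennsfw2` Python package; the `predict_image()` helper is taken from that project's README, so treat the exact call as an assumption:

```python
import opennsfw2 as n2  # pip install opennsfw2

def keep_if_safe(image_path: str, threshold: float = 0.5) -> bool:
    """Return True if the generated image scores below the NSFW threshold."""
    nsfw_probability = n2.predict_image(image_path)  # float in [0, 1]
    return nsfw_probability < threshold

# Filter the output instead of the prompt: discard and regenerate on failure.
if keep_if_safe("generated.png"):
    print("serve image")
else:
    print("discard and regenerate")
```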
