This prompt, sent via the dall-e-3 API, causes a content_policy_violation:
“A colored, digital, illustration of An orangutan with a pristine, snow-white fur coat”.
This was triggered by a user of my live application and was unexpected. It is not something I can prepare for, so I would love more information on how to avoid these errors.
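When the API rejects a prompt, the 400 response carries an error body whose `code` field is `content_policy_violation`, which lets an application distinguish policy blocks from other bad requests. A minimal sketch, using a plain dict in place of the parsed error body (the helper name is my own, not an OpenAI API):

```python
# Sketch: classify a DALL·E 3 API error by its "code" field.
# Assumes the usual error body shape:
#   {"error": {"code": "content_policy_violation", "message": ..., "type": ...}}
# is_content_policy_violation is a hypothetical helper, not part of any SDK.

def is_content_policy_violation(error_body: dict) -> bool:
    """Return True if the API error body signals a content policy block."""
    err = error_body.get("error", {})
    return err.get("code") == "content_policy_violation"
```

With a check like this you can log the offending prompt for review instead of surfacing a raw API error to the end user.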
Yes, that’s an issue which will hopefully be resolved someday, once AI companies have some legal ground to allow problematic requests in general.
I am not saying this specific prompt is problematic in my opinion, but we did actually discuss something very similar a few months back. In this case the issue is, believe it or not, ‘snow-white’, as it carries racial, sexual, and drug-related connotations.
There is no immediate solution you can expect from OpenAI at this point in time. I propose asking your users to rewrite the prompt, or having an AI model rewrite it for you.
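The “have the AI rewrite it for you” idea can be sketched as a small retry wrapper: try the prompt as-is, and on a policy violation rewrite it once and resubmit. The callables and the exception class below are illustrative stand-ins; in practice `generate_fn` would call the image endpoint and `rewrite_fn` would call a chat model:

```python
# Hypothetical sketch of rewrite-and-retry on a content policy block.
# ContentPolicyViolation stands in for the API's error with code
# "content_policy_violation"; generate_fn and rewrite_fn are injected
# so the logic can be shown (and tested) without a live API call.

class ContentPolicyViolation(Exception):
    """Stand-in for the API error with code 'content_policy_violation'."""

def generate_with_rewrite(generate_fn, rewrite_fn, prompt):
    """Try the prompt as-is; on a policy violation, rewrite and retry once."""
    try:
        return generate_fn(prompt)
    except ContentPolicyViolation:
        return generate_fn(rewrite_fn(prompt))
```

Retrying only once keeps the failure mode bounded: if the rewritten prompt is also blocked, the error propagates to the caller instead of looping.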
It would be great if the security layer assessed prompts based on their semantic meaning instead of issuing content-violation warnings without further explanation.
While explaining exactly what is causing the content warning would likely be confusing to the unsuspecting user, it should be possible to place a request into some form of context to assess whether it is problematic or not.
But then again, some users will take the generated images out of the context they were generated in and, ‘Bam!’, there is a racist orangutan generated by DALL-E 3.
Thank you for the responses, all. The issue is certainly ‘snow-white’, likely because it is copyrighted material as well as an adjective.
Sadly, asking users to rewrite the prompt does not make sense in this context, and I already apply certain prompt-engineering rules before making the image request.
What would be far more helpful is a list of problematic words and phrases that I could avoid using in the prompt. Does OpenAI have such a list?