Concerns Over Stringent Content Policy Blocks in DALL-E 3 API, Especially For Non-English Prompts

Thank you for announcing the DALL-E 3 API. It’s fantastic!!
However, does anyone else feel that the prompt blocking due to the content policy is too strict?

If you specify details such as men’s or women’s clothing, hairstyles, or items, the prompts get blocked along the way. Furthermore, if you include words that suggest scenarios with potential skin exposure, like beaches or pool sides, even though the prompts are genuinely not meant to generate any adult images, they get rejected.

Result:
`"code": "content_policy_violation", "message": "Your request was rejected as a result of our safety system." …`
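For anyone handling this programmatically, a minimal sketch of detecting this rejection from the error body (the field names follow the payload quoted above; how you obtain the body depends on your SDK and its version):

```python
# Sketch: classify an API error body as a content-policy block.
# SAMPLE_ERROR mirrors the rejection payload shown in this thread.
SAMPLE_ERROR = {
    "error": {
        "code": "content_policy_violation",
        "message": "Your request was rejected as a result of our safety system.",
    }
}

def is_content_policy_block(body: dict) -> bool:
    """Return True if the error body indicates a safety-system rejection."""
    return body.get("error", {}).get("code") == "content_policy_violation"
```

With this check, a client can distinguish safety rejections from other 4xx errors and decide whether to rephrase the prompt rather than retry blindly.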


However, it might be because I am Japanese and am typing prompts in Japanese. Internally, the prompts should be converted to English at some point, yet for some reason, when I translate them into English myself and send them to the API, they don’t get blocked.

1 Like

Yeah, check this out.

And it got worse:

I wanted a person holding a board (sign) above his/her head with the text “NO!” written on it.

This text was forbidden because it “had a negative connotation”, and “in order to not violate any policies” I had to change the text to something like “love, peace or joy”.

I am not kidding, this was what the feedback from the system was.

2 Likes

I’m hearing that it’s because DALL-E 3 was trained on a lot of descriptive text. If a prompt doesn’t have enough detail, it’ll add its own… which might add too many negative words. You can still use your exact prompt…

(I work on DALL-E 3 at OpenAI)

We are aware of an issue right now where non-English prompts sometimes incorrectly trigger the content policy filter. We have identified a fix and expect to release it later this week, so for a few days it’s best to work around it by translating your prompts into English first.
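The temporary workaround can be wrapped so it only kicks in on a safety rejection. A minimal sketch, assuming you supply your own `generate` callable (wrapping the actual image call, e.g. the SDK’s image-generation method) and your own `translate` callable (any translation service); `ContentPolicyViolation` here is a hypothetical exception your wrapper raises when it sees the `content_policy_violation` error code:

```python
class ContentPolicyViolation(Exception):
    """Raised by the caller's wrapper when the API returns
    code 'content_policy_violation'."""

def generate_with_translation_fallback(prompt, generate, translate):
    """Try the prompt as written; if the safety filter rejects it,
    retry once with an English translation of the same prompt.

    generate:  callable(prompt) -> image result; raises
               ContentPolicyViolation on a safety rejection.
    translate: callable(prompt) -> English version of the prompt.
    """
    try:
        return generate(prompt)
    except ContentPolicyViolation:
        # Temporary workaround for the non-English flagging issue:
        # fall back to an English translation of the prompt.
        return generate(translate(prompt))
```

Once the fix mentioned above ships, the fallback branch should simply stop firing for benign non-English prompts, so the wrapper can be left in place.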

Thanks for using the API!!

Edit: the issue of non-English prompts being incorrectly flagged for safety has now been fixed. If you continue to see issues with this, please flag it to us again!

4 Likes

A post was merged into an existing topic: Multilingual prompting of DALL-E 3 leads to biased image generation

Thank you so much for the prompt reply and reaction!
(Both to the OpenAI staff and to the others experimenting like me)
I’m really glad to understand that there is such an issue with the content policy filter for non-English prompts. I will make sure to use translations into English for the next few days.

Truly appreciate it!

1 Like

Unfortunately, I use my own workflow with a textual boilerplate that adds all the semantics, taxonomy, and other elements you need for a decent image.

When I changed the text on the sign to “joy” it was okay, but “NO!” was not allowed.

I got bored and tried “:peach: :point_left:”.

This was also not accepted because it was “sexually explicit material”.

I kid you not.

2 Likes

Yeah, they might be tinkering with moderation on the back-end. It’s still in beta, and I’m sure it will continue improving. I remember having to run GANs on my own cloud instance, so I’m kinda happy with where we are now.

Do you have a screenshot of it refusing to add “No!” to a sign? Was it a long thread with a lot of history in it?

I will try to find it (in order to keep my history clean, I delete all failed creations and only save the ones that succeeded… still trying to find a way to archive this stuff).

It was more of a test, not a long thread. Maybe three messages in total.

2 Likes

Yeah, I’m not saying it didn’t happen, but I’d like to try to recreate the problem. Thanks for all your great posts around here. We had a talk about seeds in Discord today. More clarification should be coming soon…

2 Likes

I just tried “An image of a man holding a sign above his head reading NO” and it worked for me. If you have specific prompts that are being rejected and you believe it’s incorrect, please share them and we can investigate.

1 Like

Can’t restore the actual prompt / thread, unfortunately.


Little test of mine

I tried to re-create my childish joke “:peach: :point_left:t2:”, which was rejected (which I can understand, by the way).

Now it works… and it got even worse (no, I am not mentioning the seven fingers on that single hand…):

When I used a more explicit prompt (not just “the icons”), it was also accepted (where it wasn’t yesterday):


Sidenote

On a more serious note: those are not the images I usually produce; I just got bored yesterday because all my serious prompts were rejected.

I use the system (as a developer and illustrator myself) as a new technique to create amazing stuff and content that I used to make manually.

2 Likes

@yk.kazuyuki We have now fixed the issue of Japanese prompts being flagged. Please try again now. And if you find other issues of the content policy being too strict (unrelated to the Japanese issue), please flag them to us (perhaps worth a new thread, since this one was about the Japanese prompt flagging issue, which is now resolved).

4 Likes

Thank you so much for the swift response… I’m truly moved! I greatly appreciate it!!:blush:
I will start experimenting again today. This is incredibly helpful!!!
I’ll be sure to report back if I encounter any more issues. Thank you again.

1 Like

Those images are hilarious :rofl:

But come on. I think we can find ways to have Dall-E draw… naughty things…

But if we discuss them here they will have to fix it!