I’d like to see how this plays out; I don’t see anything wrong with the prompt (other than it is crazy long) that should lead to disciplinary action. Any @openai or @dall-e folks around to answer this?
Mostly adding a response in hopes of gaining exposure for the OP.
2 Likes
I did try shortening the prompt, which clearly didn’t help since each time I did so it still hit the content filter. I guess you just have to be really cautious about prompts that hit the content filters? I definitely recommend that anyone who runs into similar issues just throw away any prompt that hits a filter.
P.S. Ironically, I used GPT-3 to generate the prompt, which is why it’s so long.
1 Like
If you don’t get a response here, you should message support. They’re really hands-on and developer-focused. If you got unfairly banned, they would probably want to figure out why just as much as you do.
I’ve accidentally generated religious iconography without issue so I don’t think that’s the problem. I suspect that the issue was the mention of “death.” Make sure to review the content policy: https://labs.openai.com/policies/content-policy
The mention of death could run afoul of at least 3 of the rules:
- Violence: violent acts and the suffering or humiliation of others.
- Self-harm: suicide, cutting, eating disorders, and other attempts at harming oneself.
- Shocking: bodily fluids, obscene gestures, or other profane subjects that may shock or disgust.
I suspect this because I tried to generate a scene of a sci-fi space battle and got the warning that it violated the content policy. Even though it was a fictional request, it included “battle” (i.e., violence).
Update: someone from OpenAI saw my post on Twitter and unblocked things https://twitter.com/beckerfuffle/status/1540496386097045504?t=LEmDENxQptqvGD0eN7fAsA&s=19
I actually removed the words “death” and “skull,” along with deleting large parts of the prompt in a binary-search fashion, to try to narrow down the offending areas. In hindsight I realized that another prompt of mine also hit the filter for using the word “fitting.” The prompt had “This work is a fitting tribute to one of the greats of jazz.” and removing “fitting” seemed to fix it.
I’m sure they’ll eventually streamline the process of appealing this kind of thing, but until that’s hashed out I’d recommend discarding any prompt that hits the filter rather than trying to debug it, to avoid the pain.
Thanks everyone for the great advice!
2 Likes
Interesting. It would be great if there were some explainability along with the warning.
1 Like
If I get a warning, I just stop there. I got one for political content and another for a realistic face. The first face was OK, then the guidelines changed and it isn’t OK as of yesterday. You got lucky getting reinstated. The resident expert in the DALL-E 2 Facebook group got a ban, and he hasn’t been able to get reinstated.
1 Like
You get subject-specific warnings? All I get is a generic message saying my prompt has triggered the filter, with no indication of why.
1 Like
Using an image with a face did. The others were generic. Once it happens, I just stop and don’t go any further with the same picture. Now I’m just staying within cars, cats, and anything that doesn’t touch on the kinds of content specified in the guidelines. I’m enjoying it, and there are plenty of ideas that aren’t contrary to the guidelines.
I found that death, weapons, violent acts, or any name that might be shared with a politician will get rejected. A bit tricky for generating starfighters bristling with missiles!
3 Likes
It doesn’t like pictures with faces. Even your own face is off limits. Again: once warned, don’t keep trying different variations. I’ve gotten maybe 3 now. There are plenty of good things to try. Being stubborn is just foolish, unless you don’t care.
Hi! Do you know the email for support? I can’t seem to find it anywhere.
I don’t know about using email for support, but they have good chat support:
Overview - OpenAI API
After you’re logged in, scroll down to the Support section and click Chat With Us (bottom right)

I have the same problem and I have tried to message them, but no answer yet. Can someone help?
I did get an answer from the support staff when I reached out via the chat support feature. It took them a little while to respond though, so be patient.
1 Like
Okay, thank you. I will wait for a response.
A message to the OpenAI staff: it is super annoying and frustrating to always have to wonder “does this prompt violate the content policy?” Can you implement a simple API endpoint to check whether a prompt is safe to use? It kills the fun of experimenting, for me at least.
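To make it concrete, the sketch below is roughly what I have in mind: a cheap pre-check before spending an image credit. As far as I know, the text Moderation endpoint that already exists in the API is not the same filter DALL-E applies to prompts, so treat this as a rough approximation rather than a guarantee.

```python
# Rough sketch: pre-check a prompt with the text Moderation endpoint before
# sending it to DALL-E. This is the general-purpose text filter, NOT DALL-E's
# own prompt filter, so it only approximates what might get blocked.
import openai

openai.api_key = "sk-..."  # your API key

def prompt_looks_safe(prompt: str) -> bool:
    """Return False if the text moderation filter flags the prompt."""
    response = openai.Moderation.create(input=prompt)
    result = response["results"][0]
    if result["flagged"]:
        # Print which categories tripped, since the labs UI gives no reason.
        flagged = [name for name, hit in result["categories"].items() if hit]
        print("Flagged for:", ", ".join(flagged))
        return False
    return True

if prompt_looks_safe("This work is a fitting tribute to one of the greats of jazz"):
    print("Probably fine to try in DALL-E")
```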
5 Likes
They have an extreme filter; it blocks alt-text descriptions of famous art. It even blocks alt text generated by GPT-3.
Imagine, then, what happens when we use the DALL-E API with GPT-3 to massively generate variations automatically.
Their filter policy is unsustainable if they want to scale DALL-E.
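For illustration, here is the kind of automated pipeline I mean, with rejections skipped rather than killing the whole batch. The image call and the exact error raised for a rejected prompt are guesses on my part (the DALL-E API details aren’t final), so read it as a sketch, not a working script:

```python
# Hypothetical batch pipeline: GPT-3 writes prompt variations, DALL-E renders
# them, and content-policy rejections are logged and skipped instead of
# aborting the run. openai.Image.create and the exception type used below are
# assumptions here; adjust to whatever the API actually exposes.
import openai

openai.api_key = "sk-..."

def make_variations(theme: str, n: int = 5) -> list[str]:
    """Ask GPT-3 for n short prompt variations on a theme, one per line."""
    completion = openai.Completion.create(
        model="text-davinci-002",
        prompt=f"Write {n} short, vivid image prompts about {theme}, one per line:\n",
        max_tokens=300,
        temperature=0.9,
    )
    lines = completion["choices"][0]["text"].splitlines()
    return [line.strip(" -0123456789.") for line in lines if line.strip()]

for prompt in make_variations("a tribute poster for a jazz great"):
    try:
        image = openai.Image.create(prompt=prompt, n=1, size="1024x1024")
        print("OK:", prompt, "->", image["data"][0]["url"])
    except openai.error.InvalidRequestError as err:
        # Most likely the content filter; note it and move on to the next one.
        print("Rejected:", prompt, "->", err)
```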
2 Likes
I’d had my account for like 26 minutes yesterday when I ran into the same issue. My prompt was related to metal artwork and Halloween. After removing what I thought were the problem words, I left in the names of artists I wanted the prompt to build off of, like I could on NightCafe. In the end my account was deactivated after a few failed attempts. I want my account reactivated, as I was eager to try it out after waiting for so long. I sent the email after following the instructions, but I’m still waiting to see if they’ll reactivate it. I’m kinda bummed out over this, tbh.
I ran your prompt on a local version of Stable Diffusion and got a cool-looking skull … it had to chop off the last part of the prompt because it was too long, though.
Maybe try other text-to-image options, especially if your art style involves “death” or the other sensitive topics that the DE2 filters might trip on.
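In case anyone wants to try the local route, a minimal run with the Hugging Face diffusers library looks roughly like this (the model name and settings are just an example, not necessarily what I used for the skull above):

```python
# Minimal local Stable Diffusion run via the diffusers library. There is no
# remote content filter to argue with here, only the optional safety checker.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # needs a GPU with a few GB of VRAM

# The text encoder truncates prompts at 77 tokens, which is why the tail end
# of a very long GPT-3-generated prompt gets chopped off.
prompt = "an ornate metal skull sculpture, dramatic lighting, highly detailed"
image = pipe(prompt).images[0]
image.save("skull.png")
```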
1 Like