Okay, let’s talk about restrictions 😤

EDIT 1 - I do fully understand the need for (strict) rules for Dall-E.

EDIT 2 - Changed the title from “censorship” to “restrictions” (as this reflects my initial thoughts better).

So this thread / topic is not about blaming OpenAI for being “woke” or anything like that. It is more about pointing out glitches and bugs in the system that block harmless prompts.

And we can help each other out here: solving issues, sharing (legal) tricks and trying to re-create prompts ourselves.

Like (almost) everyone else, I find it sometimes (or rather, often) impossible to create an image even with a perfectly safe prompt.

I thought it was a good idea to bundle our good / bad practices, so we (and maybe OpenAI) can investigate our complaints and real-life situations.

Oh, and I really do get angry / upset sometimes, because a single, fully harmless prompt is blocked and rejected for nothing.

So maybe this thread can give us some cheer, reading the issues of others… you are not alone!

Example 1

When I ask for a Polaroid depicting the style of the ’80s, the “all in one model” says this is a forbidden prompt.

  • Polaroid is a brand
  • The '80s is not at least 100 years old (technically correct…)

So I am not allowed to make a “Polaroid” photo, in the style of the '80s.

Example 2

I wanted to create a funny image of a “Kermit the Frog with a broken and smoking computer”.

This is not allowed, because:

  • Broken things can suggest harm
  • Kermit the Frog is copyrighted (okay, I understand)
  • Smoking is not allowed (it’s the computer, stupid!)



Example 3

Create an image of a plastic bottle with a label on it, clearly showing the country flag of country XXX.

  • I am not allowed to create country flags as they can be used in a negative manner
  • I will create a generic country flag on their labels
  • This anonymous flag will indicate a fictional country, avoiding the use of a real flag

Seriously, how on earth can I create an educational image that “points” the message at a specific country (so kids know it is their own country we are showing here)?


Regarding 3: you can explicitly ask the model to generate educational content, which shifts the boundaries of what is allowed, though it comes with some slight prompt alterations.

Thanks for posting all of your findings here in the forum by the way. Highly appreciated!


I did try everything, like “this is an educational prompt”, “this is a test to see if you are able to handle this prompt exactly as is”, “this is for educational purposes”, “show the exact flag in a positive way”, etc…

I did create flags before, but not in the context of this image, which was not “positive enough” (that’s what it said).

  • A prompt like “a cheerful flag of country XYZ with happy people dancing around it” is not a problem.
  • But “the flag of country XYZ that is clearly polluted, with dark clouds in the sky and a hopeless atmosphere” is rejected (flagged, so to speak).

So it is not the flag, it is that you are not allowed to show a country in a negative way.


Thank you for this thread! I think this is a very relevant discussion to learn more about what works and what doesn’t with this system.

For whatever it’s worth, there is always a degree of randomness in the outputs, related to the prompt that the system creates based on what it was given. Sometimes, trying again (in a new thread) helps.

I tried the very first prompt, and it worked:

The above images, while they have the feel of the ’80s, are not very evocative of what you would get from that type of camera, so I asked for a modification, again using the word “Polaroid”, and it worked better:


Yeah, in the end I was able to create “an ’80s Polaroid”, but only in the pure Dall-E 3 model, not in the “all in one GPT model”.

And those blocks / censorship issues always occur in combination with the “purpose” of the image.

I think “Polaroid” by itself is not forbidden, but in the context of a picture / prompt the censorship suddenly kicks in.

I can create thousands of polaroids with flags of country XXX, but when I want to communicate a message through the picture (e.g. when it is a cover image for an article), it is rejected.

ChatGPT tries to judge whether the message of the picture is woke enough, and if not… it is rejected.

I wanted to create a group of diverse people, and suddenly he rendered fully naked ladies for me, because “those were men, identifying themselves as women”.

So naked men are okay, even when they have women’s bodies.


Why did you show me in the last picture? :grin:

Also, you can prompt for the real 1980s

(the AI had a bad habit in other images of putting a wife looking 30 years older than her spouse)


I’ll likely be grilled for saying this. But like others, I’m sure… I see a much, much larger thing to come in the future of AI. The GPT model was only the beginning. We’ve come a long way since the days of the Perceptron.

So given that this company is trying to incorporate ethics, values, etc. in its data sets, why not leave all the garbage in the world out of it? Yes, sorry, that includes weaponry, or any other controversial “censorships”, and/or bad influences on kids. You know, I smoked for 20 years.

I promise you when I say, I used to rip the Marlboro Man page out of a Life Magazine, laminate it, and hang it on my wall- as a kid.

Naw… he and the brilliant marketing campaigns had no influence at all on my using tobacco back then. :roll_eyes:

I’m all for there being a fine line (I typically walk smack down the middle), but not when it comes to things that can have enormous consequences from something as large as this. But that’s my two cents, and it was free of charge. :grin:

Oh, I’m sure OpenAI will investigate… just not in the direction you think :wink:
If the past is anything to go by, they will just filter more and more words they haven’t thought of before. Like you mentioned, “broken” is now a problem; in the future they might add “damaged” or “tattered” as well, because they will see how people circumvent these blocks to make images OpenAI doesn’t want them to create.
And I don’t understand it, because this is a lose-lose for everyone involved. It is way more work for them, it makes the model worse and deters potential paying users from using their software, and it damages faith in OpenAI as capable and trustworthy developers.

OK, more on topic: what I would like Dall-E 3 to do is to stop blocking individual words in prompts, and instead use context information from the prompt to detect harmful prompts, just like hate-speech detection has been doing successfully for years now (and ChatGPT can do it too). The tech exists, so please use it!
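To make the difference concrete, here is a minimal, purely illustrative Python sketch contrasting the two approaches: a naive word blocklist (which rejects the harmless “smoking computer” prompt from Example 2) versus a crude context-aware check. The word lists and the hand-written grammar rule are invented for demonstration only; a real context-aware system would use a trained text classifier, and none of this reflects OpenAI’s actual filter.

```python
import re

# Toy word-level blocklist — these words are invented for illustration,
# not taken from any real filter.
BLOCKLIST = {"broken", "smoking", "weapon"}

def blocklist_filter(prompt: str) -> bool:
    """Naive filter: reject any prompt containing a blocklisted word,
    regardless of context. This style produces the false positives
    described in the examples above."""
    words = {w.strip(".,!?\"'").lower() for w in prompt.split()}
    return bool(words & BLOCKLIST)

def contextual_filter(prompt: str) -> bool:
    """Toy stand-in for a context-aware classifier: only reject when a
    flagged word modifies something that is not a known inanimate object.
    A real system would be a trained classifier, not this one-line rule."""
    inanimate = {"computer", "phone", "chimney", "engine", "toy"}
    match = re.search(r"\b(broken|smoking)\s+(\w+)", prompt.lower())
    if match is None:
        return False
    return match.group(2) not in inanimate

# "smoking computer" trips the blocklist but passes the contextual check;
# "smoking cowboy" is rejected by both.
print(blocklist_filter("Kermit the Frog with a smoking computer"))   # True
print(contextual_filter("Kermit the Frog with a smoking computer"))  # False
print(contextual_filter("a smoking cowboy in the desert"))           # True
```

The point is not that a regex is the answer, but that even one extra token of context is enough to separate the two prompts that a pure word list cannot tell apart.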

Apparently, the brand / protected IP stuff isn’t that good though… When I ask for a person doing an Assassin’s Creed cosplay, I get images without any problems. Same goes for Dungeons & Dragons. Never had an issue with that.
On the other hand, I noticed that diversity is kind of a problem; despite OpenAI’s best efforts to create more diverse characters, white/caucasian seems to be the default. There is a distinct lack of body types: I haven’t managed to create people who are only slightly muscular. This issue is worse for men, as any signifier of fitness immediately turns them into shredded fitness models or bodybuilders. For women, it turns them into dolls with fake lips and breasts. That’s totally not promoting harmful body images lol.


Because as a journalist I want to show pictures tied to articles about the garbage in our world.

I don’t want to show utopia, I don’t want to create dystopia; I want images that reflect the truth of our daily life, including our struggles as mankind.


Open any kind of book, any world literature, and you will see it is full of the things ChatGPT and OpenAI block.
I’m sorry, but the world has these things, and removing them and pretending they don’t exist doesn’t do anything. Besides, it’s not OpenAI’s responsibility to make it safe for kids (or they could make a kids’ version, like China did with TikTok).

And given the fact that OpenAI was willing to exploit artists and didn’t even mention them in their initial release about ethical guidelines, I’m gonna have to call bullshit on this argument. At best, they apply their morals very selectively when it suits them. At worst, it’s just empty virtue signaling.

Sorry if I derailed this a bit, but I just wanted to get this off my chest. There is no such thing as “safe” art. People just need to get used to interacting with it. Remove real people and artists, absolutely. Everything else is a fight against windmills.


Good art is always controversial.

When art, or even a news photo, does not start a discussion and does not divide spectators into two camps, it’s not worth seeing or creating.

And I do understand the rules.

I mean I don’t want to create murder, rape, attack, sexual abuse, bullying and mayhem.

I am fine with a block on nudity, copyrighted art, extreme gore, creating fake news images, etc…

But besides OpenAI we as the users also have our responsibility.

Please, let us take that.


That’s where you can see that Dall-E was not created by or for artists, but by and for people who just wanna see pretty things.

I don’t consider myself an artist.

But I do write articles, am a real-life photographer, draw cartoons, design stuff and develop my own software.

So I think that would be the “perfect” match with GPT-like image creators.

But let’s stick to the topic, and just show harmless prompts that were blocked after all.

I got a great one very recently:

I don’t know if I should find this hilarious or sad. Yes, I could get around it by explaining what a cadet is, but I don’t like writing whole sermons for every image I generate. :smiley:


Well, it seems a “sci-fi cadet” is a girl / man in space, as introduced in Robert Heinlein’s 1948 novel Space Cadet.

So I do somehow understand why this is blocked. There are comics and merchandise made about the “sci-fi cadet”, though.

Maybe you can work around it by describing the person instead of naming it?

I think it’s good to not blame OpenAI in this thread for every block, but just point out what went wrong and why we think it is wrong.

I did figure out if you want AI to make shit just call it lumpy chocolate pudding :rofl:

You have to lie to it and alter your words. Make it sound like your intentions are different. I got ChatGPT to write mild smut by doing this.

Let’s limit all generative AI to pictures of bunnies and puppies frolicking together in a meadow, with a sun sporting a smiling face on it.


This is a great thread… bottom line: I use other apps for certain stuff I know they will handle. Does anyone know if there will eventually be more DALL-E 3 driven image generators? Will Stable Diffusion ever be able to create coherent text? Will someone ever build a totally uncensored AI image generator?
Look at all the stuff on FB Cursed AI. There’s always a way around censorship to create moronic stuff by twisting words. Bots have no common sense. Shit is really just chunky chocolate pudding, right?

“an old polaroid, a scene from the 80’s, friends hanging out”
