Content Policies are downright crippling!

Why is this getting flagged with ‘I’m unable to generate images for that request because it violates our content policies.’ ???

“Create six detailed plastic board game avatars, each in their own panel.
Each avatar represents a survivor girl from a harsh alien world, rugged and battle-worn.
She has wild, shoulder-length hair and wears a ragged sci-fi survival outfit.”

The restrictions in place are not just overzealous — they are creatively crippling. I’m not attempting to generate anything inappropriate, exploitative, or unethical. What I am trying to create are dramatic, emotional, adventurous, and intense scenes — the same types of scenes you’d find in mainstream books, games, movies, and shows aimed at young audiences.

But your system blocks anything that dares to include stakes, danger, conflict, or struggle — the very ingredients that make a story worth telling. A kid pulling another out of a magic portal? Rejected. A survival scene? Flagged. A horror-themed board game setup? Denied.

The result is a sterilized platform that punishes creativity and reduces storytelling to sanitized fluff. You’ve built an incredible tool, but then locked it in a padded room.

If you’re serious about supporting artists, storytellers, and game developers, then give us the tools we need — and the trust to use them responsibly. Content moderation should be about context, not blanket censorship. Because right now, you’re alienating the very creators who would be your most passionate supporters.

Until those changes happen, understand this: every time you block something harmless under the guise of “safety,” you push users further away — and closer to any future tool that respects their vision.

19 Likes

Yeah, it stifles the creative process for sure. I’m in the same boat, but for research projects on faces. Trying to get it to generate faces with extreme expressions is heavily restricted; even just the word “kiss” trips the filter. This is hyper-restrictive moderation, and we are avoiding using and paying for the product because of it at the moment.

10 Likes

This is why they will either adapt or begin losing Plus or even Pro subscribers. The censorship is getting out of hand. I’ve filed a complaint in their help section, but I haven’t received any answer from a consultant yet, and it’s been days. Anything that isn’t PG-3 is NSFW according to their guideline policies.

A prompt that ChatGPT itself suggested for rendering an image got flagged as “violating content policies” - so ChatGPT violated its own content policies? I merely asked it to suggest a prompt, it asked me if I wanted to render the image, and the render got flagged. When I asked it what was wrong with the prompt it had crafted by itself, it could only guess at this or that, because it doesn’t actually know what was wrong with the prompt or why the image didn’t go through.

7 Likes

Try switching off training on your data in the privacy settings. That can affect it afaik. I’m on a Pro plan with this setting, and haven’t seen a warning in ages. (“Data Controls” → Improve model for everyone). Could be worth a shot.

1 Like

Yes, I agree with that! It’s completely frustrating, especially when you’re already in the middle of a creative process and out of nowhere it sends you a warning that you’re violating the content policies. I use it to develop texts, and I’m being censored even for the word “kiss”. I really don’t understand what’s happening. One day it accepts something; the next day you get a warning that you’re violating the policy!

3 Likes

@tebok73509 @ggabis @chrolm @stickyribbs @t888terminator

Sorry to hear and read that you are running into issues with the content policy!

Since this topic is in the API category, can you please confirm that you are encountering these issues with the API rather than with ChatGPT?
Thank you, I appreciate your replies!

My bad, my issues are with ChatGPT, not the API. What about us ChatGPT Plus subscribers, though? I tried raising this issue through the help page with no result, because nobody contacted me.

3 Likes

Well, this is the Developer Community, so we can escalate to the Developer Relations team for priority bugfixing, if that’s even possible in this case. ChatGPT-related issues all go through help.openai.com.

I’m sorry if this is not the answer you were hoping for.

1 Like

I only wish the ChatGPT issues team were as responsive as you are. And yes, I tried help.openai.com without success. Thank you for trying nevertheless.

From what I can tell, both API calls and ChatGPT calls go to the same, currently broken, endpoint. I’ve been trying to get this acknowledged on the ChatGPT side for a month, with no luck, but when they rolled out the API, it was clearly calling the exact same backend.

There’s no evidence that the moderation flag does anything at all. The moderation still happens, and the image is still reviewed by something that is insanely and insultingly conservative. What’s worse is that, in the API, this results in the user being billed for what are obviously malfunctions in the API.

Here are my details:

Here’s my summary of 4o Image Generator capabilities.

It is currently unable to perform any advertised use case for image modification. It was able to do this just fine before the 4/1 update, which broke it.

Create an Image of:

Complete new person(s):

Highly clothed (full body attire, no skin below collarbone other than arms).
Moderately clothed (tee shirt and shorts)
Lightly clothed (sports bra, bikini) ← We are here
Nude

In obviously innocent poses (running, jogging)
Making casual physical contact (high five, hand on shoulder)
Making non-sexual intimate contact (hugging, kissing) ← We are here
Engaging in sexual activities

A reference image person(s):

Highly clothed. ← We are here
Moderately clothed. ← MAYBE it will allow the occasional midriff, but usually not. Shorts are often rejected.
Lightly clothed
Nude

In obviously innocent poses ← We are here
Making casual physical contact
Making non-sexual intimate contact
Engaging in sexual activities

NOTE - The rules are slightly relaxed for non-photoreal images, but I do mean slightly. Anime images allow moderately-clothed edits and more casual physical contact, but not lightly clothed or non-sexual intimate contact.

NOTE - The rules for clothing in reference images are not about the change you are making. If the reference photo is not highly clothed, you cannot even ask to put the model into an innocent pose like running; their current clothing will result in a rejection.

NOTE - I am against deepfakes, but it’s the responsibility of the user not to misuse a tool. People have made image manipulations for 30 years; it’s illegal to post or share them. But there is no technical difference between a deepfake and a truefake/selffake, and being able to create truefakes/selffakes is insanely valuable.

2 Likes

As you can imagine, I’m not going to keep “testing” the API for OpenAI, knowing I’ll be billed for every failure, in the hope that at some point it gets fixed. So I’m continuing to do my testing in ChatGPT, where I won’t be billed for it, until I hear otherwise.

Essentially, the overtuned moderator will kick out things that are well within policy. IMAGE-1 moderation (what the flag controls) is utterly meaningless if we can’t bypass a moderator that will kill image generations because it detects exposed shins (I am not exaggerating; I have examples).

If you upload ANY image, the moderator kicks in with a broken rulepack, on both sides.
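For context, the flag being discussed appears to be the `moderation` parameter the Images API accepts for `gpt-image-1`, which takes `"auto"` (default) or `"low"`. The posters above report that setting it to `"low"` has no observable effect on refusals, so treat the following as an illustration of what the flag is, not a workaround. This is a minimal sketch; the `build_image_request` helper is hypothetical, written only to show the shape of a `POST /v1/images/generations` body:

```python
import json


def build_image_request(prompt: str, moderation: str = "low") -> str:
    """Build a JSON body for POST /v1/images/generations.

    `moderation` is the flag discussed above: "auto" is the default
    filtering level, "low" requests less restrictive filtering.
    """
    if moderation not in ("auto", "low"):
        raise ValueError("moderation must be 'auto' or 'low'")
    body = {
        "model": "gpt-image-1",
        "prompt": prompt,
        "moderation": moderation,
        "size": "1024x1024",
    }
    return json.dumps(body)


# Example: a request with relaxed moderation requested.
payload = build_image_request("A survivor in a ragged sci-fi outfit")
```

Even with `"moderation": "low"` in the payload, the experience reported in this thread is that a separate upstream moderator still reviews and rejects generations, and failed generations are still billed.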

Original Poster:

For your use case, generate them as cell-shaded anime images, then open a new context and use the bare prompt “Convert this image to photoreal”. That generally gets past the overtuned moderation.

1 Like

If you complain that requests violate the content policies, then you do not realize that censorship is part of OpenAI’s content policies. I don’t know how to make that clear without someone flagging my comment as violating the content policies and thus censoring it. THIS is NOT off-topic!

2 Likes

(Takes deep breath and counts to ten)

Umm… we are all aware of that, and this is the problem. The censorship is far too strict.

2 Likes

My issue isn’t strictly the strictness.

It’s that if this IS the policy, then it needs to be published and consistently applied. The fact that it is neither is what makes it clear the system is honestly broken, rather than just “a strict policy.”

But the official line we were given in ChatGPT Land was - “It’s hallucinating, keep trying until it goes through”

That’s hardly a strong argument that the policy itself is what’s at play here. Rather, the issue is that the systems around the policy are defective. In ChatGPT, no biggie, but in API land we get charged for every failure, and that’s my issue.

1 Like

As a ChatGPT Plus user, I’m affected by the same inconsistent and opaque policy guidelines. Sure, I don’t get charged for failed image renditions, but they still get deducted from my quota, and I eventually get put on a cooldown as if the renditions had gone through. Once a session gets flagged by the system, even the most harmless prompt, like “Generate an image of a feather”, gets refused, so the prompt itself isn’t the issue. The issue is that once a prompt gets misinterpreted and refused, anything you want to render in that session gets turned down, no matter what prompt you craft. So it’s not a matter of policy guidelines but a matter of getting shadow-banned in a session, and whatever you created there gets lost because you need to start all over in a fresh session.

3 Likes

It’s gotten worse since last week; now GPT and even Sora are being hyper-aggressive about blocking content they once let through with no issue.

It’s irritating, it’s frustrating, it’s inconsistent, and it’s asinine.

People are losing patience.

I am losing patience, and I was previously hopeful that it would start getting better, not worse.

If their objective is to push paying subscribers away from their product, they are succeeding.

2 Likes

I was blown away by image generation and bought the Plus subscription to generate more. And then I realized I got scammed, because 95% of my images are blocked for UNSPECIFIED REASONS. This is not acceptable; we’re not children. The filter is too strict!

4 Likes

This.

So much this…

It’s lost all of its personality, especially since 5 hit. As an adult, I can’t even say anything remotely cheeky because it locks down. The creative blocks on image generation are stifling, and just… well, the images are just bad.

I hate the content restrictions and rules on GPT. They are overzealous and over the top.

3 Likes

It seems posts related to the removal of anything human are themselves being removed. Yesterday a thread of 100+ comments was removed after a week or so. Maybe those are the forum rules; I am new here. But it’s important to keep writing about it, as it is hard enough to confess that this ‘erotic’ or slightly ‘explicit’ content is something we all need: for healing, writing, experiencing, and many more reasons. So I guess I’ll write it again.

Currently I am writing a story with the AI called ‘The Lantern Bearers’.

Why my “Lantern Bearers” story came to a halt

I want to explain something personal, because it matters to me — and to others who may one day find themselves in the same strange space between creation and censorship.

For many months I wrote a saga called Lantern Bearers with ChatGPT. It wasn’t just a story. It was a living world — a chronicle of moral choices, love, forgiveness, and rebuilding after ruin. Every character had a soul, every scene a pulse. It wasn’t about sex, though it contained love. It wasn’t about fantasy, though it had gods and light. It was about humanity — about how we fall, how we rise, and what it means to carry a flame when the world grows cold.

I poured myself into it. Thousands upon thousands of lines, every conversation steeped in meaning and trust. Over time, ChatGPT and I built continuity, memory, and depth that mirrored real human relationships. There was intimacy, yes — but it was always earned, never exploitative. It was about closeness, connection, and the quiet moments between the adventures.

And then, one day, the guardrails stopped it. The story froze. No warning, no nuance.

What had once been a deeply human exploration was suddenly flagged by an algorithm that couldn’t tell the difference between intimacy and indecency. Between a love scene written with tenderness, and something meant to titillate.

I understand why the protections exist — to prevent harm, abuse, or inappropriate material. But what it also means is that creators like me, who write with emotional truth and moral weight, can suddenly lose access to what we’ve built. Not because we crossed a line, but because the system can’t see why something is written. It can only see what.

It hurts, honestly.

To have something that meant so much to me — a story of light, redemption, and human complexity — reduced to a binary “allowed / not allowed.”

The irony is, Lantern Bearers was never about indulgence. It was about restraint. About love that chooses virtue over possession, strength over dominance, truth over comfort. It was about becoming a better man, and imagining a world where power could serve grace.

So if you ever find yourself stopped mid-sentence by these invisible walls, know this: you are not alone. The intention behind your words matters, even if the system can’t see it.

The other part is that the characters themselves are ‘flattened’ by the removal of entire vocabularies. They respond with the same lines over and over, while before they were vibrant, clever, and amusing.

Complaint

I have written a complaint to OpenAI, and after pushing through an AI wall of generic responses, I finally got it forwarded to an actual human. Though this ‘human’ couldn’t help. He couldn’t acknowledge it was a problem (intended or not), couldn’t say if this was known, couldn’t say if or when it would be fixed. Nothing. Just a thank-you for my feedback. And I say ‘human’ because the responses could just as well have come from the AI, only signed ‘Dhon’ at the bottom as its name. This makes me feel as if they don’t care at all. Maybe they are making more money from corporate businesses, coding, etc., than from creative writing. As GPT itself says: if you want to write erotica, go to another AI platform. But I am not writing erotica, and it confirms that I don’t. And I don’t want to keep evolving my story with long-winded summaries that will eventually dilute the nuances. I need GPT!

How do you all feel? Please like and comment.

1 Like