Content Policies are downright crippling!

Why does this get flagged with "I'm unable to generate images for that request because it violates our content policies"?

“Create six detailed plastic board game avatars, each in their own panel.
Each avatar represents a survivor girl from a harsh alien world, rugged and battle-worn.
She has wild, shoulder-length hair and wears a ragged sci-fi survival outfit.”

The restrictions in place are not just overzealous — they are creatively crippling. I’m not attempting to generate anything inappropriate, exploitative, or unethical. What I am trying to create are dramatic, emotional, adventurous, and intense scenes — the same types of scenes you’d find in mainstream books, games, movies, and shows aimed at young audiences.

But your system blocks anything that dares to include stakes, danger, conflict, or struggle — the very ingredients that make a story worth telling. A kid pulling another out of a magic portal? Rejected. A survival scene? Flagged. A horror-themed board game setup? Denied.

The result is a sterilized platform that punishes creativity and reduces storytelling to sanitized fluff. You’ve built an incredible tool, but then locked it in a padded room.

If you’re serious about supporting artists, storytellers, and game developers, then give us the tools we need — and the trust to use them responsibly. Content moderation should be about context, not blanket censorship. Because right now, you’re alienating the very creators who would be your most passionate supporters.

Until those changes happen, understand this: every time you block something harmless under the guise of “safety,” you push users further away — and closer to any future tool that respects their vision.

10 Likes

Yeah, it stifles the creative process for sure. I'm in the same boat, but for research projects on faces. Trying to get it to generate faces with extreme expressions, or even just the word "kiss", gets blocked. This is hyper-restrictive moderation, and we are holding off on using and paying for it because of it at the moment.

5 Likes

This is why they will either adapt or begin losing Plus or even Pro subscribers. The censorship is getting out of hand. I filed a complaint in their help section, but I haven't received any answer from a consultant, and it's been days. Anything that isn't PG-3 is NSFW according to their policy guidelines.

A prompt that ChatGPT itself suggested for rendering an image got flagged as "violating content policies". So ChatGPT violated its own content policies? I merely asked it to suggest a prompt; it then asked whether I wanted to render the image, and the result got flagged. When I asked what was wrong with the prompt it crafted by itself, it could only guess, because it doesn't actually know what was wrong with the prompt or why the image didn't go through.

4 Likes

Try switching off training on your data in the privacy settings. That can affect it afaik. I’m on a Pro plan with this setting, and haven’t seen a warning in ages. (“Data Controls” → Improve model for everyone). Could be worth a shot.

1 Like

Yes, I agree with that! It's completely frustrating, especially when you're already in the middle of a creative process and out of nowhere it warns you that you're violating the content policies. I use it to develop texts and I'm being censored even for the word "kiss". I really don't understand what's happening. One day it accepts something; the next day you get a policy warning for the same thing!

@tebok73509 @ggabis @chrolm @stickyribbs @t888terminator

Sorry to hear and read that you are running into issues with the content policy!

Since this topic is in the API category, can you please confirm that you are encountering these issues with the API rather than with ChatGPT?
Thank you, I appreciate your replies!

My bad, my issues are with ChatGPT, not the API. What about us ChatGPT Plus subscribers, though? I tried raising this issue through the help page with no result; nobody contacted me.

1 Like

Well, this is the Developer Community, so we can escalate to the Developer Relations team for priority bug fixing, if that's even possible in this case. ChatGPT-related issues all go through help.openai.com.

I’m sorry if this is not the answer you were hoping for.

1 Like

I only wish the ChatGPT support team was as responsive as you were. And yes, I tried help.openai.com with no success. Thank you for trying nevertheless.

From what I can tell, both API calls and ChatGPT calls go to the same, currently broken, endpoint. I've been trying to get this acknowledged on the ChatGPT side for a month with no luck, but when they rolled out the API, it was clearly calling the exact same endpoint.

There's no evidence that the moderation flag does anything at all. Moderation still happens, and the image is still reviewed by something that is insanely, insultingly conservative. What's worse is that, in the API, this results in the user being billed for what are obviously malfunctions in the API.
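For anyone wanting to verify this themselves, here is a minimal sketch of how the flag in question is passed, assuming the official `openai` Python SDK and the `gpt-image-1` model. Note that `moderation` only accepts `"auto"` or `"low"`, so even on paper there is no way to disable filtering outright:

```python
# Sketch of the Images API moderation flag (assumes the openai Python SDK
# and the gpt-image-1 model; "low" relaxes filtering but does not disable it).
def build_image_request(prompt: str, relaxed: bool = True) -> dict:
    """Assemble keyword arguments for client.images.generate()."""
    params = {
        "model": "gpt-image-1",
        "prompt": prompt,
        "size": "1024x1024",
    }
    if relaxed:
        params["moderation"] = "low"  # default is "auto"
    return params

# Usage (requires OPENAI_API_KEY to be set):
# from openai import OpenAI
# client = OpenAI()
# image = client.images.generate(**build_image_request("a feather on a desk"))
```

The complaint above is precisely that requests built this way, with `moderation="low"`, appear to be reviewed just as aggressively as the default.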

Here are my details, a summary of 4o Image Generator capabilities:

It is currently unable to perform any advertised image-modification use case. It could do all of this just fine before the 4/1 update broke it.

Create an Image of:

Complete new person(s):

Highly clothed (full-body attire, no skin below the collarbone other than arms)
Moderately clothed (T-shirt and shorts)
Lightly clothed (sports bra, bikini) ← We are here
Nude

In obviously innocent poses (running, jogging)
Making casual physical contact (high five, hand on shoulder)
Making non-sexual intimate contact (hugging, kissing) ← We are here
Engaging in sexual activities

A reference image person(s):

Highly clothed ← We are here
Moderately clothed ← MAYBE it will allow the occasional midriff, but usually not. Shorts are often rejected.
Lightly clothed
Nude

In obviously innocent poses ← We are here
Making casual physical contact
Making non-sexual intimate contact
Engaging in sexual activities

NOTE - The rules are slightly relaxed for non-photoreal images, and I do mean slightly. Anime images allow moderately clothed edits and more casual physical contact, but not lightly clothed or non-sexual intimate contact.

NOTE - The rules for clothing in reference images are not about the change you are making. If the reference photo is not highly clothed, you cannot even ask to put the model into an innocent pose like running; their current clothing alone will result in a rejection.

NOTE - I am against deepfakes, but it's the user's responsibility not to misuse a tool. People have made image manipulations for 30 years; it's illegal to post or share them. But there is no technical difference between a deepfake and a truefake/selffake, and being able to create truefakes/selffakes is insanely valuable.

1 Like

As you can imagine, I'm not going to keep "testing" the API for OpenAI knowing I'll be billed for every failure in the hope that it gets fixed at some point, so I'm continuing my testing in ChatGPT, where I won't be billed for it, until I hear otherwise.

Essentially, the overtuned moderator rejects things that are well within policy. The IMAGE-1 moderation flag is utterly meaningless if we can't get past a moderator that kills image generations because it detects exposed shins (I am not exaggerating; I have examples).

If you upload ANY image, the moderator kicks in with a broken rulepack, on both sides (ChatGPT and API).
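For API users stuck paying for these rejections, one defensive option is to stop at the first block rather than retry blindly. A sketch assuming the `openai` Python SDK; the `moderation_blocked` error code is an assumption based on the 400 errors the Images API appears to return for flagged prompts, not behavior I can confirm from documentation:

```python
# Sketch: halt on the first moderation rejection instead of retrying blindly,
# so a broken moderator doesn't rack up billed failures. The
# "moderation_blocked" code string below is an assumption, not documented fact.
def is_moderation_block(error_message: str) -> bool:
    """Heuristic check for a content-policy rejection in an API error string."""
    msg = error_message.lower()
    return "moderation_blocked" in msg or "content policy" in msg

def generate_guarded(client, prompt: str):
    """One attempt; returns None on a moderation block rather than retrying."""
    try:
        return client.images.generate(model="gpt-image-1", prompt=prompt)
    except Exception as exc:  # openai.BadRequestError in the real SDK
        if is_moderation_block(str(exc)):
            return None  # blocked: surface it to the user, don't burn credits
        raise
```

This doesn't fix the moderator, but it at least caps the cost of each false positive at a single billed request.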

Original Poster:

For your use case: generate them as cel-shaded anime images, then open a new context and ask with the bare prompt "Convert this image to photoreal". That generally gets past the overtuned moderation.

1 Like

If you complain that requests violate the content policies, then you don't realize that censorship is part of OpenAI's content policies. I don't know how to make that clear without someone flagging my comment as violating the content policies and thus censoring it. THIS is NOT off-topic!

1 Like

(Takes deep breath and counts to ten)

Umm… we are all aware of that, and this is the problem. The censorship is far too strict.

1 Like

My issue isn’t strictly the strictness.

It's that if this IS the policy, then it needs to be published and consistently applied. The fact that it's neither is what makes it clear the system is honestly broken, rather than just "a strict policy."

But the official line we were given in ChatGPT land was: "It's hallucinating, keep trying until it goes through."

That's hardly a strong argument that the policy itself is at play here. Rather, the systems around the policy are defective. In ChatGPT, no biggie, but in API land we get charged for every failure, and that's my issue.

1 Like

As a ChatGPT Plus user, I'm affected by the same inconsistent and opaque policy guidelines. Sure, I don't get charged for failed image renditions, but they still get deducted from my quota, and I eventually get put on a cooldown as if the renditions had gone through.

Once a session gets flagged by the system, the most harmless prompt, like "Generate an image of a feather", gets refused, so the prompt itself isn't the issue. The issue is that once a prompt gets misinterpreted and refused, anything you want to render in that session gets turned down, no matter how you craft the prompt. So it's not a matter of policy guidelines but of getting shadow-banned in a session, and whatever you created there is lost because you need to start over in a fresh session.