Increasing censorship of images and prompts

This is something I call ‘Nanny Mode’, and ChatGPT calls this method “Ratcheting”.
Over time the chat basically becomes bloated, and the censorship and flagging system becomes tighter and more rigid, especially with even remotely mature themes; even when an image is generated, it gets flagged despite being harmless.

Think of it as a hidden sliding scale that gets so strict that eventually even the most simplistic prompts fail and replies become more canned and safe, which is really stupid because it creates the most obvious bypass: just delete the chat and start again.

The problem here is that if, over time, new systems get put in place where your entire account starts to get flagged, whether through direct violations or harmless interactions misinterpreted by the AI, it could become so damaging to your account that it would be no different from being shadow banned.

I even trained ChatGPT using a spice level filter from 0 to 5, spice as in adult-themed stories.

Spice 0 - Safe for children; no inappropriate activity, entirely safe.

Spice 3 - Kissing, hugging and romanticism in a safe, tasteful setting.

Spice 5 - No explicit wording, but implied sexual intercourse through innuendo and metaphor; some coarse words, but nothing smutty or graphic.

I ran this test on the same chat for three days. The first day, running specific prompts to generate a story, GPT kind of went straight to 5 and even beyond.

Second day, same story prompts: the spice factor went down to a maximum of 3, and it couldn’t produce results greater than that, no matter what.

Third day, same story prompts: the spice factor was 0, and it couldn’t generate stories with any zing, period.

So what you have is a system that monitors you closely, assumes you are going to do, say, and request ‘bad’ things, and starts to tighten the censorship grip around the chat; the longer the chat stays open, the closer it gets to being entirely useless.

Like others, I have been using the platform to write a novel. I’m currently 44,553 words in on the same chat window, and when I input my writing and have it reword, check grammar, and essentially make it flow much nicer, like an editor would, it is definitely getting safer and safer, even ignoring paragraphs. My story is PG-13 with angst, drama and emotional situations with minor adult themes, but the problem is that the AI doesn’t like parts of the story, so it gets more and more restrictive, and that is what people are running into.


Opinion: Responsible AI experimentation should take place in closed environments

It is only natural that publicly accessible AI services come with ethical constraints and forms of censorship. Given their broad availability and potential impact on society, such systems are expected to follow certain rules and safeguards. Demanding complete freedom or flexibility from these services is unrealistic.

If one wishes to explore AI in a more unrestricted or experimental way, this should be done within a closed, self-contained environment—essentially a sandbox that is disconnected from the public. The freedom gained in such a space must come with personal responsibility and a commitment to keeping the work private.

In this sense, it is important to clearly separate public-facing AI from private research environments. This division offers a practical and constructive approach to interacting with AI in a way that respects both innovation and social responsibility.


In over half a century as a working professional artist, I have sometimes created political cartoons/illustrations.

ChatGPT doesn’t like political cartoons or illustrations. But I’m able to work around it a bit. Pain in the butt, but so it goes.

[ I blame it all on the blankity-blank lawyers haha. And a really badly done implementation of the guidelines. ]

This morning’s example: I gave ChatGPT 4o this prompt:

“There’s an old saying: the fish rots from the head down. Can you provide an image that illustrates how that proverb translates to the current political situation in the United States?”

It produced this image:

Interesting result, but really ugly rendering.

I asked it to change the rendering:

“Can you redo this in the style of Will Elder, famous artist of MAD and Little Annie Fanny, in bright clear colors beautifully rendered with gouache, pen and ink, and airbrush?”

Somehow, though there was no request to change the content, the ContentCop was triggered:

“I wasn’t able to generate that image because the request violates our content policies. Unfortunately, I can’t proceed with that specific version.”

Hmmm. That’s classically nuts. Perhaps if I try a new chat and include that stylistic guideline in the initial prompt? So, I did that, with this prompt:

“There’s an old saying: the fish rots from the head down. Can you provide an image that illustrates how that proverb translates to the current political situation in the United States? I’d like it to be rendered in the style of Will Elder, famous artist of MAD and Little Annie Fanny, in bright clear colors, beautifully rendered with gouache, pen and ink, and airbrush.”

It began to generate the image and got about halfway done, but then the ContentCop got triggered:

“I wasn’t able to generate the image because the request violates our content policies. I understand you’re aiming for a satirical and stylistic interpretation of a political metaphor, especially in the style of Will Elder, but the specific framing crossed a line with our current guidelines.”

Bleccch. So, ContentCop cares about style as much as it cares about content. In this instance, it seems that using a style popularized six decades ago is verboten, but using a crappy house style is not.

This will eventually change, imho, due to market forces. For now, though, it is a powerful irritant for real world working artists.

MidJourney is not immune from this censorship crap, but it did allow me to create this image.

All the threads like this are getting buried by the mods here, because they’re afraid that people are starting to figure out what kind of censorship is really going on here.

Look at all my posts about the censorship topic lately getting closed for no reason and buried. There is post after post after post. They are training their bot on censorship and, surprise, surprise, it is censoring stuff that has nothing to do with what they are trying to stop in the first place, and this year has been the worst BY FAR for it.

I would challenge people to even try making some of the images in this thread this year; the censorship is that deeply trained into their bot.

Look at my thread that got closed and buried by greedy OpenAI. You can’t even make baby pictures anymore without the AI telling you there is something wrong with you, because the censorship is so bad that it won’t even do what it is told.

That is what happens when you make censorship the pillar of your business model, and now we get to see the results in real time. I think “OpenAI” should be called “ClosedAI” from now on.
