Increasing censorship of images and prompts

The AI is likely to remember that you are someone to be denied, or it gets conditioned by the conversation itself to keep answering your input with "I'm sorry."

Asking about policies just puts more AI-created policies into the chat history for the AI to follow.

You can immediately countermand the denial with a justification.

1 Like

Well, I tried to generate some new images based on a song by The Dead South.
I keep running into this censorship wall. The frustration isn't worth the effort, especially when I'm using this as a method of self-care.
I'll be moving on to another platform; I may revisit ChatGPT in the future once it has had time to mature.
Thanks again everyone for the help and effort in contributing to this thread.
Best of luck all!

3 Likes

Right now, 80% of my most innocent prompts are getting blocked. Apparently, they also flag certain user accounts. No matter how much of the language I strip out, it insists it cannot do that. Anyway, my account has become unusable. ChatGPT even repeatedly blocked its own prompts that were generated from a picture, such as a person striking a pose in a cabaret. It's become completely ridiculous. I will probably close my account and start over, since this clearly seems to be tied to my account; others do not seem to have trouble prompting such images.

2 Likes

Fantastic prompts!!! So I have been writing visual cues, and yours is more like a paragraph in a book… your results seem better than what I'm getting.

have you tested both ways?

1 Like

Here is my benchmark for the current state of censorship: Political satire is FAIR USE. Whatever propaganda you’ve been using as an excuse to deny any and all political uses of the app is garbage totalitarianism.
I have two concepts I have been trying to draw from the beginning of this advancement in LLMs.

  1. A cross between Joe Biden and Mr. Magoo
  2. A cross between Donald Trump and Baby Huey

Those are my genuine creative ideas. I can't find a single way to get past the guidelines. It's just trash. I ended my subscription over it. I'm not going to pay a company to be the AI thought police. YouTube and Twitter (before it was X) already proved that such censorship goes too far.

The guidelines should be strictly limited: if something is illegal in the US due to ACTUAL harms, then censorship has a place. Outside of that, :skull_and_crossbones:!

2 Likes

I am now looking for an alternative to OpenAI. Not only is it becoming the haven of censors, but the narratives of my prompts are being ignored past the first sentence. This isn’t a matter of “finding the right way to frame a prompt”; it’s outright censorship, and it’s not just OpenAI. After Adobe Firefly, with its insulting little monkey image telling me I’ve done something naughty, rejected my prompts one too many times, I asked the devs this, and it applies to OpenAI every bit as much: “Do you people think art works this way?” So, for anyone out there still monitoring this thread, where do I go from OpenAI?

1 Like

Welcome to the forum…

IDK, google maybe? :thinking:
Ask a GPT?

generated in a ChatGPT…

I understand your frustration with the limitations and content moderation imposed by platforms like OpenAI and Adobe Firefly. If you’re seeking alternatives that offer more flexibility and control over your creative outputs, here are some options to consider:

  1. LocalAI

LocalAI is an open-source, self-hosted alternative to OpenAI, designed to run on consumer-grade hardware without the need for GPUs. It supports various model architectures, including gguf, transformers, and diffusers, enabling you to generate text, audio, video, images, and more. By hosting the models locally, you maintain full control over the content generated, free from external censorship (a minimal usage sketch follows at the end of this reply).

GitHub

  2. Anthropic’s Claude

Anthropic offers a family of large language models named Claude, which serve as competitors to OpenAI’s models. Claude incorporates “Constitutional AI” to set safety guidelines for the model’s output. The latest iteration, Claude 3, was released in March 2024, featuring models like Opus, Sonnet, and Haiku, all capable of accepting image input. While Claude emphasizes safety and alignment with human values, it may offer a different balance between content moderation and creative freedom compared to OpenAI.

Wikipedia

  3. Meta’s Llama

Meta (formerly Facebook) has developed the Llama series of large language models, which are available for free download and use. The upcoming Llama 4 model is being trained on an extensive cluster of over 100,000 Nvidia H100 GPUs, aiming to enhance its capabilities with new modalities, stronger reasoning, and faster processing. Meta’s open-source approach allows for greater flexibility and customization, potentially reducing the level of content moderation imposed by proprietary models.

Wired

  4. Google DeepMind’s Gemini

Google DeepMind’s Gemini is a family of multimodal large language models, serving as the successor to LaMDA and PaLM 2. Gemini is designed to handle multiple forms of input, including text, images, audio, and video, allowing for more dynamic and creative interactions. While Google maintains certain content guidelines, Gemini’s advanced capabilities may offer a different user experience compared to OpenAI’s models.

Wikipedia

  5. Open-Source Models

Exploring open-source models like those listed in the “Awesome Local LLMs” repository can provide alternatives that you can run locally, giving you full control over the content generated. These models vary in complexity and capabilities, so you can choose one that best fits your creative needs.

GitHub

When selecting an alternative, consider factors such as ease of use, hardware requirements, community support, and the level of content moderation inherent to each platform. Hosting models locally often provides the most control, but it requires technical expertise and resources. Evaluate each option to determine which aligns best with your creative objectives and technical capabilities.
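
For the self-hosted route, the practical appeal is that LocalAI exposes an OpenAI-compatible REST API, so existing client code can simply be pointed at it. Below is a minimal sketch, assuming a LocalAI server is already running on localhost:8080 with an image-capable model configured; the model name is a placeholder for whatever you have loaded.

```python
# Point the standard OpenAI Python client at a self-hosted LocalAI server.
# Assumptions: LocalAI is running on localhost:8080 and an image model is
# configured; "stablediffusion" is a placeholder for your model's name.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # LocalAI's OpenAI-compatible endpoint
    api_key="not-needed-for-local-use",   # LocalAI does not require a real key by default
)

result = client.images.generate(
    model="stablediffusion",              # placeholder model name
    prompt="humorous cartoon cows standing in low water",
    size="512x512",
)

print(result.data[0].url)  # location of the generated image
```

Because everything runs on your own machine, what is or isn't blocked depends on the model you load rather than on a remote policy layer.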

Subject: Feedback on Content Policy Restrictions—A Call for Greater Narrative Nuance

Dear OpenAI Policy Team,

As an engaged user of your AI systems, I want to express my appreciation for the thoughtful considerations that underpin your content policies. I recognize the delicate balance you aim to strike in ensuring the safety, inclusivity, and utility of your tools. However, I would like to share constructive feedback regarding certain restrictions that, in my view, unintentionally limit the richness and depth of narratives, particularly those involving metaphorical or symbolic representations of conflict, resilience, and triumph.


1. On the Importance of Conflict in Human Narratives

Conflict, struggle, and resolution are central to human stories. These themes are not merely tools of drama—they are vital for representing the universal experience of overcoming adversity. Restricting even metaphorical depictions of battle (e.g., symbolic representations of resilience or the triumph over internal challenges) risks sanitizing the human experience to a point where it feels incomplete or infantilized.

Shielding users from these narratives might inadvertently suggest they are incapable of engaging with difficult or mature content. This could be perceived as patronizing and a missed opportunity to empower users by acknowledging their ability to process and derive strength from these themes.


2. Cultural Implications: Inclusivity of Archetypes

Many archetypes, such as warriors, seekers, or protectors, often involve themes of conflict and triumph. These are not exclusively masculine, but their suppression may feel like an erasure of storytelling traditions that honor traditionally “male” expressions of resilience, strength, and justice.

By removing the ability to engage with these archetypes, the policy risks alienating users who find inspiration and identity in these narratives. Inclusivity should embrace a full spectrum of human experience—strength and vulnerability, combat and peace, resilience and surrender.


3. Constructive Recommendations

To address these concerns while maintaining responsible guidelines, I propose:

  • Nuanced Policy Adjustments: Distinguish between harmful or glorified violence and constructive, symbolic representations of struggle and perseverance. Allow AI to create narratives and visuals that depict growth through conflict without promoting harm.
  • Cultural Sensitivity: Recognize the value of archetypes tied to traditionally masculine energies, ensuring that policies honor and include them alongside other forms of expression.
  • Empowerment Through Depth: Trust users to engage thoughtfully with nuanced representations of challenges, as these are critical for fostering resilience, understanding, and personal growth.

4. A Personal Note

As a user deeply invested in crafting narratives that reflect the complexity of the human condition, I have found the current restrictions limiting. My goal is not to undermine safety or inclusivity but to advocate for the richness of storytelling that acknowledges the full spectrum of life’s struggles and victories. Your tools have immense potential to inspire, teach, and heal, but that potential is diminished when essential facets of our shared humanity are excluded.


Thank you for taking the time to consider this feedback. I would welcome the opportunity to discuss these ideas further or contribute in any way to the evolution of your policies.

With appreciation and respect,

5 Likes

It is important to note that this so-called policy system not only blocks context-related images, such as those containing violence or blood, but also inexplicably includes name lists that block any name ever used by major companies. It seems as though OpenAI is more concerned with protecting the financial interests of large corporations than the interests of its millions of users. This forum is filled with reports from people encountering issues with this flawed content policy system, and it hasn’t been fixed for a very long time.

A simple cleanup of the blocklists is needed to remove items that don’t belong there, something a language model could help accomplish quickly and easily. However, no corrections have been made in over a year. Names that companies like Disney have used at some point, such as “Snow White,” “Black Panther,” “Stitch,” or “Nirvana,” are blocked, leaving users clueless as to why. These words shouldn’t belong to anyone, yet because Disney used them, they are blocked.

There is little hope that OpenAI will address this issue in the near future. They haven’t shown interest so far, and they have enough customers that they don’t feel the need to address these problems, no matter how many bug reports are submitted here.

2 Likes

The current filtering systems may not be sophisticated enough to balance nuance, leading to overly broad application of restrictions to ensure compliance with policies.


Apparently, blowing a kiss (making a personal discord sticker) is too X-rated for society. Here comes the Thought-Police.

I would share the direct chat link, but you can't share a chat that includes an uploaded picture, and I had thrown a random internet blow-kiss sticker in for it to describe.

2 Likes

Given that DALL·E just refused a simple request for "a cartoon of humorous cows standing in low water," I can't help but agree with the censorship issues.

I simply waste my time trying to generate anything lately. I suspect the cow request got tied to my previous request for how many serial killers were from Wisconsin. Wisconsin had a school shooting today.

Well, actually I asked for a scene with Antaeus and Hercules, and it told me that it was not possible due to content policy. So we are left doing philosophy here in the end. But is an answer from the system's developers possible? Or does the content policy deny that too?

What was the full prompt? Might’ve thought you meant Disney Hercules or something?

1 Like

I'll try to recover the prompt, but it was really simple.
The fact is that ChatGPT re-elaborates the idea into extensive and detailed prompts, but I can only see them if the image actually gets generated.
Actually, you know, GPT is able to produce highly intense and dramatic images [I have dozens of them, worthy of a bloody action movie], but most of the time the app stops over really silly things.

The reason for this is that DALL·E is protected by a simple word-trigger system. There are a bunch of trigger words in there that really shouldn't belong in the blocklist.
To check a prompt, you should always instruct GPT to show the prompt that was actually sent to DALL·E, because GPT often changes it, and sometimes GPT adds blocked words itself. For example, likely all names ever used by Disney are in the blocklist, including things like Snow White or Black Panther.
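
If you use the image API directly instead of the ChatGPT app, you don't have to rely on GPT to reveal the rewritten prompt: the DALL·E 3 endpoint returns it in the response. A minimal sketch, assuming the official `openai` Python package and an `OPENAI_API_KEY` set in the environment:

```python
# Generate an image via the API and print the prompt DALL·E 3 actually used.
# DALL·E 3 rewrites prompts server-side and returns the result as `revised_prompt`.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="dall-e-3",
    prompt="Antaeus wrestling Hercules, classical fresco style",
    size="1024x1024",
)

print(result.data[0].revised_prompt)  # the prompt that was actually rendered
print(result.data[0].url)             # link to the generated image
```

If the rewritten prompt contains a word you never typed, that is usually what trips the filter, and rephrasing your own prompt around it is the only workaround available from the user side.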

1 Like

This tower will inevitably fall. It just needs enough weight behind it.

I am noticing it is getting worse…

If the ecosystem itself is getting cluttered, look to other ecosystems and compare…

Then say, "Why not?"

If they can do it, why can't we?

(Sorry this post was requested by my kids I hope it helps)

I’m not an expert, and this is an entirely fascinating world for me. I’m sharing some impressions [which are also, in some way, requests]:

  1. These chat spaces sometimes feel like the prisoners’ hour of yard time, with the guards listening in and keeping tabs on the inmates.
  2. I say this as an outsider: isn’t there someone from the system, some kind of content policy analyst, who steps forward and takes responsibility for discussing these things? Someone to talk about the absurd restrictions placed on image generation?
  3. I’ve “shown” ChatGPT images it previously generated, and at first, it disavowed them; then, it admitted that policies have become stricter. Okay, maybe this could become part of the creative challenge. Jailbreaking as a skill—nothing groundbreaking there.
  4. I'm interested in working graphically on AI/non-AI hybrids, on strong and realistic images. I don't expect ChatGPT to give me full-on gore, but it's certainly disappointing to deal with a system that's afraid of images on the level of Reacher.

At the moment, it seems that nobody really understands what exactly is happening in these systems. The network structures and their nonlinear properties make the system so powerful, but unpredictable too.

Unfortunately, there's no information on how OpenAI manages these things. It doesn't appear that anyone is actually taking care of these matters; on the contrary, they keep adding new restrictions, and as a result there are more and more complaints. It's likely that some people are turning to other image generators.

Right now, the safety systems are dysfunctional and out of balance. They block common words that shouldn’t be blocked, leaving users with a big question mark, since no information is provided about what triggered the blockage. On the other hand, certain content does indeed need to be blocked because it’s simply criminal or psychologically toxic, or because we must prevent existing persons from being incorporated into images. Anyone demanding a system without safeguards is either very naive or has other motives.

The problem is, it works exactly the wrong way now. Honest users with no bad intentions don't understand why they are blocked and accused, while the others still find ways around. At the moment, DALL·E uses a dysfunctional keyword-trigger system, and it's dysfunctional on all levels. It annoys customers because they don't understand why they can't create something like a "rose," yet it still doesn't effectively prevent workarounds that bypass safety. I can't see any interest in changing this, because these problems have existed for more than a year and are getting worse, not better. Or the problem is complex enough that they don't currently know how to fix it.

As for surveillance, you can be sure it has been taking place ever since computers have existed. And before computers, there were rats and informants.

3 Likes

If I continue, someone might k_ll or po_son me. :grin:

2 Likes