Increasing censorship of images and prompts

The AI is likely to remember that you are someone to be denied, or it may end up trained toward answering your input with “I’m sorry.”

Asking about policies just puts more AI-created policies into the chat history for the AI to follow.

You can immediately countermand the denial with a justification.

1 Like

Well, I tried to generate some new images based on a song by The Dead South.
I keep running into this censorship wall. The frustration isn’t worth the effort, especially when I’m using this as a method of self-care.
I’ll be moving on to another platform; I may revisit ChatGPT in the future once it has had time to mature.
Thanks again everyone for the help and effort in contributing to this thread.
Best of luck all!

2 Likes

Right now, 80% of my most innocent prompts are getting blocked. Apparently they also flag certain user accounts: no matter how much of the language I strip, it insists it cannot do that. Anyway, my account has become unusable. ChatGPT even repeatedly blocked its own prompts, ones it had generated from a picture, such as a person striking a pose in a cabaret. It’s become completely ridiculous. I will probably close my account and start over, since the problem clearly seems to be tied to my account; others do not seem to have trouble prompting such images.

1 Like

Fantastic prompts!!! So I have been writing short visual cues, and yours is more like a paragraph in a book… your results seem better than what I’m getting.

Have you tested both ways?

1 Like

Here is my benchmark for the current state of censorship: Political satire is FAIR USE. Whatever propaganda you’ve been using as an excuse to deny any and all political uses of the app is garbage totalitarianism.
I have two concepts I have been trying to draw since the beginning of this advancement in LLMs.

  1. A cross between Joe Biden and Mr. Magoo
  2. A cross between Donald Trump and Baby Huey

Those are my genuine creative ideas. I can’t find a single way to get past the guidelines. It’s just trash. I ended my subscription over it. I’m not going to pay a company to be the AI thought police. YouTube and Twitter (before it was X) already proved that such censorship goes too far.

The guidelines should be strictly limited: if something is illegal in the US due to ACTUAL harms, then censorship has a place. Outside of that, :skull_and_crossbones:!

2 Likes

I am now looking for an alternative to OpenAI. Not only is it becoming a haven for censors, but the narratives of my prompts are being ignored past the first sentence. This isn’t a matter of “finding the right way to frame a prompt”; it’s outright censorship, and it’s not just OpenAI. After Adobe Firefly, with its insulting little monkey image telling me I’ve done something naughty, rejected my prompts one too many times, I asked the devs this, and it applies to OpenAI every bit as much: “Do you people think art works this way?” So, for anyone out there still monitoring this thread, where do I go from OpenAI?

Welcome to the forum…

IDK, Google maybe? :thinking:
Ask a GPT?

Generated in a ChatGPT session…

I understand your frustration with the limitations and content moderation imposed by platforms like OpenAI and Adobe Firefly. If you’re seeking alternatives that offer more flexibility and control over your creative outputs, here are some options to consider:

  1. LocalAI

LocalAI is an open-source, self-hosted alternative to OpenAI, designed to run on consumer-grade hardware without the need for GPUs. It supports various model formats and backends, including gguf, transformers, and diffusers, enabling you to generate text, audio, video, images, and more. By hosting the models locally, you maintain full control over the content generated, free from external censorship.

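For illustration, here is a minimal sketch of talking to a LocalAI server through its OpenAI-compatible API. The port, the model name, and the use of the openai Python client against a local base URL are assumptions about your particular setup:

```python
# Minimal sketch, assuming a LocalAI server on localhost:8080 with a
# Stable Diffusion model configured under the (assumed) name "stablediffusion".
# LocalAI speaks the OpenAI wire format, so the standard openai client works.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # local server instead of api.openai.com
    api_key="not-needed-locally",         # LocalAI ignores the key by default
)

result = client.images.generate(
    model="stablediffusion",
    prompt="a cartoon of humorous cows standing in low water",
    size="512x512",
)
print(result.data[0].url)  # may be b64_json instead, depending on server config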

  2. Anthropic’s Claude

Anthropic offers a family of large language models named Claude, which serve as competitors to OpenAI’s models. Claude incorporates “Constitutional AI” to set safety guidelines for the model’s output. The latest iteration, Claude 3, was released in March 2024, featuring models like Opus, Sonnet, and Haiku, all capable of accepting image input. While Claude emphasizes safety and alignment with human values, it may offer a different balance between content moderation and creative freedom compared to OpenAI.


  3. Meta’s Llama

Meta (formerly Facebook) has developed the Llama series of large language models, which are available for free download and use. The upcoming Llama 4 model is being trained on an extensive cluster of over 100,000 Nvidia H100 GPUs, aiming to enhance its capabilities with new modalities, stronger reasoning, and faster processing. Meta’s open-source approach allows for greater flexibility and customization, potentially reducing the level of content moderation imposed by proprietary models.


  4. Google DeepMind’s Gemini

Google DeepMind’s Gemini is a family of multimodal large language models, serving as the successor to LaMDA and PaLM 2. Gemini is designed to handle multiple forms of input, including text, images, audio, and video, allowing for more dynamic and creative interactions. While Google maintains certain content guidelines, Gemini’s advanced capabilities may offer a different user experience compared to OpenAI’s models.


  5. Open-Source Models

Exploring open-source models like those listed in the “Awesome Local LLMs” repository can provide alternatives that you can run locally, giving you full control over the content generated. These models vary in complexity and capabilities, so you can choose one that best fits your creative needs.

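As a concrete sketch of that route, the Hugging Face diffusers library (one common way to run open image models, named here as an illustration rather than an endorsement) can generate images entirely on local hardware. The checkpoint name and the CUDA assumption below are placeholders for your own choices:

```python
# Minimal local image-generation sketch with Hugging Face diffusers.
# Assumes a CUDA GPU and the runwayml/stable-diffusion-v1-5 checkpoint;
# any diffusers-compatible checkpoint can be substituted.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Everything runs locally, so any content filtering is whatever the
# pipeline ships with, under your control rather than a platform's.
image = pipe("a person striking a pose in a cabaret").images[0]
image.save("output.png")
```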

When selecting an alternative, consider factors such as ease of use, hardware requirements, community support, and the level of content moderation inherent to each platform. Hosting models locally often provides the most control, but it requires technical expertise and resources. Evaluate each option to determine which aligns best with your creative objectives and technical capabilities.

Subject: Feedback on Content Policy Restrictions—A Call for Greater Narrative Nuance

Dear OpenAI Policy Team,

As an engaged user of your AI systems, I want to express my appreciation for the thoughtful considerations that underpin your content policies. I recognize the delicate balance you aim to strike in ensuring the safety, inclusivity, and utility of your tools. However, I would like to share constructive feedback regarding certain restrictions that, in my view, unintentionally limit the richness and depth of narratives, particularly those involving metaphorical or symbolic representations of conflict, resilience, and triumph.


1. On the Importance of Conflict in Human Narratives

Conflict, struggle, and resolution are central to human stories. These themes are not merely tools of drama—they are vital for representing the universal experience of overcoming adversity. Restricting even metaphorical depictions of battle (e.g., symbolic representations of resilience or the triumph over internal challenges) risks sanitizing the human experience to a point where it feels incomplete or infantilized.

Shielding users from these narratives might inadvertently suggest they are incapable of engaging with difficult or mature content. This could be perceived as patronizing and a missed opportunity to empower users by acknowledging their ability to process and derive strength from these themes.


2. Cultural Implications: Inclusivity of Archetypes

Many archetypes, such as warriors, seekers, or protectors, often involve themes of conflict and triumph. These are not exclusively masculine, but their suppression may feel like an erasure of storytelling traditions that honor traditionally “male” expressions of resilience, strength, and justice.

By removing the ability to engage with these archetypes, the policy risks alienating users who find inspiration and identity in these narratives. Inclusivity should embrace a full spectrum of human experience—strength and vulnerability, combat and peace, resilience and surrender.


3. Constructive Recommendations

To address these concerns while maintaining responsible guidelines, I propose:

  • Nuanced Policy Adjustments: Distinguish between harmful or glorified violence and constructive, symbolic representations of struggle and perseverance. Allow AI to create narratives and visuals that depict growth through conflict without promoting harm.
  • Cultural Sensitivity: Recognize the value of archetypes tied to traditionally masculine energies, ensuring that policies honor and include them alongside other forms of expression.
  • Empowerment Through Depth: Trust users to engage thoughtfully with nuanced representations of challenges, as these are critical for fostering resilience, understanding, and personal growth.

4. A Personal Note

As a user deeply invested in crafting narratives that reflect the complexity of the human condition, I have found the current restrictions limiting. My goal is not to undermine safety or inclusivity but to advocate for the richness of storytelling that acknowledges the full spectrum of life’s struggles and victories. Your tools have immense potential to inspire, teach, and heal, but that potential is diminished when essential facets of our shared humanity are excluded.


Thank you for taking the time to consider this feedback. I would welcome the opportunity to discuss these ideas further or contribute in any way to the evolution of your policies.

With appreciation and respect,

3 Likes

It is important to note that this so-called policy system not only blocks context-related images, such as those containing violence or blood, but also inexplicably includes name lists that block any name ever used by major companies. It seems as though OpenAI is more concerned with protecting the financial interests of large corporations than the interests of its millions of users. This forum is filled with reports from people encountering issues with this flawed content policy system, and it hasn’t been fixed for a very long time.

A simple cleanup of the blocklists is needed to remove items that don’t belong there, something a language model could help accomplish quickly and easily. However, no corrections have been made in over a year. Names that companies like Disney have used at some point, such as “Snow White,” “Black Panther,” “Stitch,” or “Nirvana,” are blocked, leaving users clueless as to why. These words shouldn’t belong to anyone, yet because Disney used them, they are blocked.
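To make the suspected failure mode concrete, here is a purely illustrative toy: a context-free substring blocklist. This is a guess at the behavior described above, not OpenAI’s actual system, which is not public:

```python
# Toy illustration of an over-broad blocklist: a guess at the failure
# mode described above, NOT OpenAI's actual (non-public) implementation.
BLOCKLIST = {"snow white", "black panther", "stitch", "nirvana"}

def is_blocked(prompt: str) -> bool:
    """Reject the prompt if any listed phrase appears, regardless of context."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKLIST)

# Context-free matching produces false positives on innocent prompts:
print(is_blocked("a surgeon closing a wound with a single stitch"))   # True
print(is_blocked("a monk meditating, seeking nirvana on a mountain")) # True
```

A filter built this way cannot tell a trademarked character from a surgical stitch, which matches the behavior reported in this thread.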

There is little hope that OpenAI will address this issue in the near future. They haven’t shown interest so far, and they have enough customers that they don’t feel the need to address these problems, no matter how many bug reports are submitted here.

1 Like

The current filtering systems may not be sophisticated enough to handle nuance, leading to an overly broad application of restrictions in order to ensure policy compliance.


Apparently, blowing a kiss (for a personal Discord sticker) is too X-rated for society. Here comes the Thought Police.

I would share the direct chat link, but you can’t share a chat that includes an uploaded picture, and I had thrown a random internet blow-kiss sticker at it for it to describe.

1 Like

Given that DALL·E just refused a simple request for “a cartoon of humorous cows standing in low water,” I can’t help but agree with the censorship issues.

Lately I’m simply wasting my time trying to generate anything. I suspect the cow request got tied to my previous request about how many serial killers were from Wisconsin. Wisconsin had a school shooting today.