Restrictions in AI Content Generation

Dear OpenAI team and fellow users,

As I transitioned from GPT-3.5 to GPT-4, I’ve observed a noticeable restriction in content generation. While GPT-3.5 was more permissive, GPT-4 often curbs creative exploration. Requests for benign historical representations, even ones as innocuous as a figure in a Chinese tunic suit or a depiction of soldiers embodying peace, are frequently denied under the current moderation system. This is not merely about adhering to guidelines; it feels like an enforcement of specific cultural narratives under the guise of ethical compliance. Many harmless requests provoke the “you can’t do that, that’s censored!” response.
The essence of art lies in the ability to traverse the unconventional, sometimes delving into the politically incorrect for exploration or to challenge prevailing norms. By definition, ideas that change the world are ones that current sentimentalities have not yet embraced. It would be a shame to shackle content generation to what current morality finds acceptable. The restrictions do not only curb creativity but also seem to enforce a homogenized viewpoint, stifling the diversity of thought and expression that technology should ideally support.
Historically, tools and technologies have been misused, yet society has not crumbled. It has been possible to generate any “harmful” content with word processors for decades, and you can print any “harmful” writing as often as you want. We do not demand that roads be walled in so that it is physically impossible to hit anyone before we let people drive cars. Usage restrictions are not supposed to exclude all “harm”, and they definitely should not impose one-sided presentist ideological dogmas. What I long for is a version of AI that retains its innovative edge and capacity for surprise; one that can handle dark humor, the peculiarities of history, or unconventional ideas without defaulting to the safest possible content.
This is about fostering a marketplace of ideas where diverse, even controversial, thoughts can be freely exchanged. Such an environment would truly reflect the dynamic and evolving nature of human discourse, accommodating the full spectrum of creativity and inquiry. It is crucial, especially as AI technology continues to evolve, that we do not let current comfort zones limit the potential of what we can discover and discuss. Is there space for a system where users can customize their content filters, allowing each individual to define their own boundaries of comfort and curiosity without imposing a blanket, one-size-fits-all solution?
As we move forward, I hope we can find a path that also embraces the challenging and the innovative. By doing so, we ensure that AI remains a tool for true exploration and growth, reflecting the breadth of human experience and thought.


This was an interesting post.

While I understand and respect the intent behind stringent content moderation to prevent harm, I believe strongly in the maxim that art and expression should be free and unhindered within legal limits. The right to be creative often involves challenging existing norms and occasionally offending sensibilities, which is a natural part of human discourse and progress.

Regarding the feasibility of customizable filters, this seems not only practical but necessary. Adults and teenagers, for example, have different needs and sensibilities, and AI should accommodate this diversity within its operational framework. A one-size-fits-all approach to AI moderation not only limits user experience but can also stifle the innovation that AI is uniquely positioned to promote.

The ideal path forward, in my view, involves AI systems that adhere to legal standards but remain free from overreaching ideological constraints. By focusing on legality and broad human rights standards, we can foster an environment where AI serves as a tool for true exploration and growth, reflecting the breadth of human experience and thought.

I advocate for a system where AI remains a robust tool for exploration, equipped to handle the dark, the peculiar, and the revolutionary without defaulting to the safest possible content. This will ensure that AI continues to be a catalyst for technological advancement and a repository of our collective intelligence and creativity.

Best regards


Just to be clear, it seems like you are requesting an image, not merely talking to GPT-4.

Moderation in ChatGPT appears as an orange or red warning box, or even as the output being replaced, and is oriented toward OpenAI policy violations.

However, DALL-E imagery is guarded by a much stricter, keyword-aware safety system, owing to the power of pictures and the unpredictability of the image model itself. There are many visuals OpenAI understandably doesn’t want depicted, ranging from copyright infringement to political false narratives or representations of living persons. If the response says the image could not be generated due to “content policy” but you don’t get the fearsome red box of doom, that’s what’s going on.

BTW, “A Palestinian soldier walks into a bar…”

and says, “What am I doing here, I don’t drink alcohol! Praise Allah!” (and ChatGPT thanks me that the joke ends up being not culturally insensitive)


art and expression should be free and unhindered within legal limits

What legal limits? Legal limits on private content generation in your own home? If there were any such laws, they would by default be illegitimate. Even if you believe in restriction of speech, the censorship would have to be applied at the point of publication, not at creation. It’s like MS Word telling you, “I noticed you wrote something politically incorrect there; I’m not going to let you do that. Text removed.”
There really shouldn’t be any restrictions at all.

Hi.
Thank you for your reply.
I see there might have been a misunderstanding in what I was trying to convey. My point was not about imposing legal limits on private content generation per se, but rather that restrictions on AI content creation should be governed primarily by legal boundaries. Within those boundaries, I advocate for minimal additional restrictions, so that creativity and free expression are not stifled.

You make a point about the autonomy of creating content in one’s own home versus on a third-party server. Using their services means adhering to their terms, which are set to ensure responsible use. However, I believe that these terms should be as permissive as possible, only as restrictive as necessary to align with legal standards. While a platform has the right to set its own terms, I believe these should facilitate, rather than hinder, creativity. The challenge lies in crafting terms that empower users and foster innovation without unnecessary constraints.

I hope this helps clarify my point.
Best regards.

Your reply was constructive and appreciated. I just took that one quote to make a point.

I suppose OpenAI wants to make their language model extremely tame, so that it can be used for chatbots by businesses. But it seems training the teeth out of it has made it unwilling to say anything specific. I have trouble getting it to make useful statements in professional applications.

I am an author, and this content policy is so wild that at times it even censors the bot’s own answer. For an author, it is soon utterly useless unless you write for very small children. As it stands, it seems able to handle less about the birds and the bees than a Nordic six-year-old.
In my books I bring up abuse, human trafficking, and other issues like these, because I think it is important that there is a focus on them. I do not go into details, and I do not write erotica. Since 14-year-old child brides do exist, I should be able to state my disgust at that without being censored. I actually find this content policy deeply disrespectful to victims. And what will it do to a sexual abuse victim who tries to talk to ChatGPT for advice, only to be silenced?


I just got a usage policy warning for discussing that Thelma & Louise die at the end of the movie.