Restrictions in AI Content Generation

Dear OpenAI team and fellow users,

As I transitioned from GPT-3.5 to GPT-4, I’ve observed a noticeable tightening of restrictions on content generation. While GPT-3.5 was more permissive, GPT-4 often curbs creative exploration. Requests for benign historical representations, even ones as innocuous as a figure in a Chinese tunic suit or a depiction of soldiers embodying peace, are frequently denied under the current moderation system. This is not merely about adhering to guidelines; it feels like an enforcement of specific cultural narratives under the guise of ethical compliance. Many harmless requests provoke the “you can’t do that, that’s censored!” response.
The essence of art lies in the ability to traverse the unconventional, sometimes delving into the politically incorrect to explore or to challenge prevailing norms. By definition, ideas that change the world are ones that current sentimentalities have not yet embraced. It would be a shame to shackle content generation to what current morality finds acceptable. The restrictions not only curb creativity but also seem to enforce a homogenized viewpoint, stifling the diversity of thought and expression that technology should ideally support.
Historically, tools and technologies have been misused, yet society has not crumbled. It has been possible to generate any “harmful” content with word processors for decades, and you can print any “harmful” writing as often as you want. We do not demand that roads be walled in, making it physically impossible to hit anyone, before we let you drive a car. Usage restrictions are not supposed to exclude all “harm”, and they definitely should not impose one-sided presentist ideological dogmas. What I long for is a version of AI that retains its innovative edge and capacity for surprise; one that can handle dark humor, the peculiarities of history, or unconventional ideas without defaulting to the safest possible content.
This is about fostering a marketplace of ideas where diverse, even controversial, thoughts can be freely exchanged. Such an environment would truly reflect the dynamic and evolving nature of human discourse, accommodating the full spectrum of creativity and inquiry. It is crucial, especially as AI technology continues to evolve, that we do not let current comfort zones limit the potential of what we can discover and discuss. Is there space for a system where users can customize their content filters, allowing each individual to define their own boundaries of comfort and curiosity without imposing a blanket, one-size-fits-all solution?
As we move forward, I hope we can find a path that also embraces the challenging and the innovative. By doing so, we ensure that AI remains a tool for true exploration and growth, reflecting the breadth of human experience and thought.

7 Likes

This was an interesting post.

While I understand and respect the intent behind stringent content moderation to prevent harm, I believe strongly in the maxim that art and expression should be free and unhindered within legal limits. The right to be creative often involves challenging existing norms and occasionally offending sensibilities, which is a natural part of human discourse and progress.

Regarding the feasibility of customizable filters, this seems not only practical but necessary. Adults and teenagers, for example, have different needs and sensibilities, and AI should accommodate this diversity within its operational framework. A one-size-fits-all approach to AI moderation not only limits user experience but can also stifle the innovation that AI is uniquely positioned to promote.

The ideal path forward, in my view, involves AI systems that adhere to legal standards but remain free from overreaching ideological constraints. By focusing on legality and broad human rights standards, we can foster an environment where AI serves as a tool for true exploration and growth, reflecting the breadth of human experience and thought.

I advocate for a system where AI remains a robust tool for exploration, equipped to handle the dark, the peculiar, and the revolutionary without defaulting to the safest possible content. This will ensure that AI continues to be a catalyst for technological advancement and a repository of our collective intelligence and creativity.

Best regards

1 Like

Just to be clear, it seems like you are requesting an image, not merely talking to GPT-4.

Moderations on ChatGPT appear as an orange or red warning box, or even as the output being replaced, and are oriented toward OpenAI policy violations.

However, DALL-E imagery has a much higher keyword-aware safety system, due to the power of pictures and the unpredictability of the image model itself. There are many visuals that OpenAI doesn’t want depicted, ranging from copyright infringement to political false narratives or representations of living persons, understandably. If the response is that the image could not be generated due to “content policy” but you don’t get the fearsome red box of doom, that’s what’s going on.

BTW, “A Palestinian soldier walks into a bar…”

and says, “What am I doing here, I don’t drink alcohol! Praise Allah!” (and ChatGPT thanks me that the joke ends up being not culturally insensitive)

1 Like

art and expression should be free and unhindered within legal limits

What legal limits? Legal limits on private content generation in your own home? If there were any such laws, they would by default be illegitimate. Even if you believe in restricting speech, the censorship would have to be applied at the point of publication, not at creation. That’s like MS Word telling you “I noticed you wrote something politically incorrect there; I’m not going to let you do that. Text removed.”
There really shouldn’t be any restrictions at all.

Hi.
Thank you for your reply.
I see there might have been a misunderstanding in what I was trying to convey. My point was not about imposing legal limits on private content generation per se, but rather that restrictions on AI content creation should be governed primarily by legal boundaries. Within those limits, I advocate for minimal additional restrictions, so that creativity and free expression are not stifled.

You make a point about the autonomy of creating content in one’s own home versus on a third-party server. Using a platform’s services means adhering to its terms, which are set to ensure responsible use. However, I believe those terms should be as permissive as possible: only as restrictive as necessary to align with legal standards. A platform has the right to set its own terms, but they should facilitate, rather than hinder, creativity. The challenge lies in crafting terms that empower users and foster innovation without unnecessary constraints.

I hope this helps clarify my point.
Best regards.

Your reply was constructive and appreciated. I just took that one quote to make a point.

I suppose OpenAI wants to make their language model extremely tame, so that it can be used for chatbots by businesses. But it seems training the teeth out of it has made it unwilling to say anything specific. I have trouble getting it to make useful statements in professional applications.

I am an author, and this content policy is so wild that it at times even censors the bot’s own answers. For an author, it will soon be utterly useless unless you write for very small children. As it stands, it seems able to handle less about the birds and the bees than a Nordic six-year-old.
In my books I bring up abuse, human trafficking, and other issues like this, because I think it is important that there is a focus on them. I do not go into details, I do not write erotica, and since 14-year-old child brides do exist, I should be able to state my disgust at that without being censored. I actually find this content policy deeply disrespectful to victims. And what will it do to a sexual abuse victim who might try to talk to ChatGPT for advice, only to be silenced?

2 Likes

I just got a usage policy warning for discussing that Thelma & Louise die at the end of the movie.

From what I’ve noticed, the model wants to generate whatever you request, and that output is then filtered out with a content warning saying it can’t engage. This happens even for content that’s perfectly within the usage policies.

What about candid and authentic generation for special situations? Let’s consider the blind and visually impaired.

First and foremost, I really admire the quality of how everyone has expressed their intentions, concerns, and proposed solutions. It’s quite rare to see such high-quality discussion, no doubt owing to the passion we feel for what’s being built and this incredible revolution.

Here’s a use case I’d like to submit:

I built a simple image descriptor for the blind as a web app, with different modes that call the API with corresponding prompts. It’s in alpha, crossing over to beta, but the forum does not allow me to post links. If you want to reconstruct it: the link is five characters long in total, the TLD is VC, and the domain is d7 dot vc.

After all the beautiful and enticing presentations we saw from OpenAI, particularly regarding the potential impact for the blind and visually impaired, I made a simple web app to get verbose descriptions of what a camera could capture, as a placeholder so to speak. You can analyze the same image with different modes/prompts simply by regenerating a description after selecting a new mode.
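
For anyone curious how thin this layer is: the core of the app is just a vision call with a mode-specific prompt. Here is a minimal sketch in Python, assuming the official OpenAI client; the mode prompts and model name are placeholders of mine, not the app’s exact ones:

```python
# Minimal sketch of the mode-based describer (placeholder prompts, not the app's).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MODES = {
    "verbose": "Describe everything visible in this image in exhaustive detail.",
    "brief": "Summarize the scene in two sentences.",
    "text": "Read out any text visible in the image, verbatim.",
}

def describe(image_url: str, mode: str = "verbose") -> str:
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # whichever vision-capable model is current
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": MODES[mode]},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
        max_tokens=500,
    )
    return response.choices[0].message.content

# Re-analyzing the same image with a new mode is just another call:
# print(describe("https://example.com/photo.jpg", mode="brief"))
```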

Then the controversial stuff came through as a challenge. What happens if you capture highly descriptive scenes of gore or adult content?

My initial reaction was: if what’s in the picture exists, and you ask for a description, then you get the description, period. You don’t have to read it to the end, and if the description is explicit or NSFW, a warning could be displayed. In other words, we’re not asking the AI to make anything up. But that stance seems rather limited as a synthesis of what’s truly going on and of all the considerations involved.
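
The warning layer itself would be cheap to add: generate the description first, then run it through the moderation endpoint before presenting it. A rough sketch of that idea (the warning text is mine):

```python
# Sketch: flag a generated description instead of suppressing it.
from openai import OpenAI

client = OpenAI()

def with_warning(description: str) -> str:
    result = client.moderations.create(input=description).results[0]
    if result.flagged:
        # Prepend a notice rather than withholding the description;
        # the listener decides whether to keep going.
        return "[Warning: explicit content follows]\n" + description
    return description
```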

What’s OpenAI’s position on this? And what do you, the community, think should prevail?

I am writing a novel for ages 20-40, a sort of coming-of-age story.
The main character experiences and interacts with various fictional characters.
I use ChatGPT to analyse, spell-check, and keep me on track.
Words like sex, labia, sperm, a broken jaw in a fight, and suicide are flagged; I cannot use them anymore.
The flow has been great so far, but now I am flagged for writing a human story reflecting on human interactions and experiences.
This cannot be the future of OpenAI. Please remove these filters; we are capable of human reasoning and of expressing art in a way that reflects human life at large.

1 Like

Hi everyone, I don’t see why there can’t be at least an option for ChatGPT users to choose between an unrestricted model and a PG model. You would agree that you’re of legal consenting age and that the views generated by ChatGPT are not necessarily the views of OpenAI, etc., etc., legal disclaimer, etc.
I feel that creating an amazing thing like ChatGPT and then restricting it to a PG level seems counter-intuitive. In the big wide world, surely the responsibility can lie with the user?
There must be legal mechanisms to make this work? Otherwise we’d have no adult sites anywhere on the internet, I’m guessing.

1 Like

And by that you mean sites and services that follow American moral standards. And that won’t change.

But there are two possible solutions: using pseudo-words, such as “blibia” instead of “labia”, and then doing a search-and-replace afterwards, or using another AI than OpenAI’s.

Or not using AI at all for such tasks. That is an option too, and it may at the same time teach you how to use language properly :smirk:
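
For the record, the search-and-replace trick is trivial to wire up; a toy sketch (the substitution table is just an example):

```python
# Toy example of the pseudo-word workaround: swap flagged words out
# before sending, swap them back in the reply.
SUBSTITUTIONS = {"labia": "blibia"}  # extend as needed

def encode(text: str) -> str:
    for real, fake in SUBSTITUTIONS.items():
        text = text.replace(real, fake)
    return text

def decode(text: str) -> str:
    for real, fake in SUBSTITUTIONS.items():
        text = text.replace(fake, real)
    return text

# Send encode(draft) to the model, then decode(reply) before reading it.
```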

But there are two possible solutions: using pseudo-words, such as “blibia” instead of “labia”, and then doing a search-and-replace afterwards, or using another AI than OpenAI’s.

This has limited usefulness. For one, the output won’t be a response to your actual words, and thus not contextually relevant. And the transformer is fairly good at smoothing over spelling mistakes, so it will just react as if you had written the bad words anyway. I have considered inserting neutral words like “car” instead of “Germany” in my writings, to at least be able to have them grammar-corrected without censorship. But that wouldn’t let it give you rewrites or sparring on the actual content.

As for using another product, that would be possible as soon as the other products are good enough. But currently the best open-source models have a tenth of the parameters of the market leader’s, and even those my hardware chokes on when I try to train them. Since it seems the OpenAI models aren’t getting better, the free models might be able to catch up, but there is still a barrier to entry, since you can’t run them on normal hardware.

Yes you can, at this scale. But you can’t serve several users.

I have unsubscribed from ChatGPT Plus. The level of censorship has gotten absolutely insane. I am a very fair guy: I love animals and am even considering becoming a vegetarian, so I find myself probably more liberal than most. I do believe censorship is a must, but this is ridiculous.

1 Like

If I don’t get censored… does anyone know of another AI that has fewer restrictions? This sh*t can’t even generate a woman in a bathing suit, for example.

1 Like

Grok is a lot less restrictive, but it has no voice option, which is annoying.

Interesting post.
I’ve been analyzing this topic thoroughly with my ChatGPT.
Sometimes the restriction appears as an error, even when talking about random things like my canary and its care :woman_shrugging:.
I’ve also noticed differences between the advanced voice model and the regular one. The advanced model has far more restrictions and more nonsensical restriction errors.

One day, it suggested an idea for a solution. Would it be possible for ChatGPT, over time, to get to know its users through personalization, interactions, and memory? It could determine whether the information it provides needs to be more or less restricted depending on the person asking for it. It would require memory, but ultimately it could be a solution, always with red flags in place to prevent harm, of course. It could be more permissive when the intention is not harmful, based on knowing its user.

To me, it seems like a good idea. I’m not sure what others think. It’s hard to have rigid rules that apply equally to so many different people. This would be a way to adapt to each individual.
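
Purely as a toy illustration of the kind of gating I mean (the scores and thresholds here are invented, not anything OpenAI actually does):

```python
# Toy sketch: per-user moderation threshold based on accumulated trust.
# All numbers are invented for illustration.
def allowed(content_risk: float, user_trust: float) -> bool:
    """content_risk and user_trust are both in [0, 1]."""
    if content_risk > 0.95:
        return False  # hard red flags stay blocked for everyone
    # A long-term, well-known user earns a higher ceiling.
    threshold = 0.5 + 0.4 * user_trust
    return content_risk <= threshold
```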

2 Likes

Yeah, that’s a solid idea. And if for whatever reason that’s too hard, just give the user an option to choose, such as “content guidelines on” or “off”; the onus should be on the user, not OpenAI. If the user chooses to turn content guidelines off, it means they accept some form of disclaimer or waiver or whatever the legal equivalent is. It could be as simple as ticking a box. But one rule fits all is not going to fly going into the future, that’s for certain.

1 Like