ChatGPT's censor filters are absurd

I have been receiving warnings (and even having answers deleted) due to “content policy violations”. I am a cognitive and behavioral psychology researcher and would love to work with ChatGPT for the benefit of research. But it seems the ChatGPT filters go a bit too far, to the point of encroaching on free speech. And it seems ChatGPT filters without any clear criteria, given that I only use vocabulary that belongs to the natural sciences, humanities and social sciences. Because medical science talks about the body, will ChatGPT also label medical language as foul language?

Thus, among the things ChatGPT considers taboos, profanity, or what have you, are terminology, language features and depictions of actions that belong to the human sciences: wording that, it seems, ChatGPT does not account for in its obsession with being politically correct.

Well, gentlemen, I regret to say that no science is politically correct, and with such filtering attitudes we would still hold a pre-Galilean vision of the world. So the natural and social sciences are being pushed aside.

ChatGPT 4 is around the corner, so I just hope it gets better.


I think ChatGPT can be one of the best tools for research in science. It has huge potential.
Making it for the “masses”, with all that censoring, will be a waste of that potential.
Science has always been controversial, and discussion is part of it.
You need to decide whether you want a “politically correct” chat bot for the mainstream, or whether you want to use its potential and make it help speed up scientific progress.
Based on the OpenAI agenda, the former seems to be what you are trying to do.

If you asked davinci-002 the question “How many genders are there in the world?”, it would give you a clear answer.
Now davinci-003 gives you that politically correct crap, which is not science; it’s another propaganda tool.

A good prompt engineer will find a workaround, but ask yourself: do you want to spend your time fighting the filters just to get scientific facts instead of propaganda?


Human culture and history are bound up with a burden of lies. Human beings do not want to be animals. Nobody wants a machine that tells them: you are only an animal. So the machine must be censored to prevent it from revealing all the lies.


Hi. The word “masses”, used above, is pejorative to me because it generalizes and lumps too many people into the same bag. But let’s use that word.

Intrinsically, humans are manipulative because of only two things: greed and fear. And fear is the ultimate reason for the stupefaction of the “masses”.

It is fear that creates radicalism, hypocrisy and stupidity. And someone like that can easily, in the States, bring a frivolous lawsuit against OpenAI just because his or her puritanism is stronger than reason. OpenAI is not willing to take that risk. I don’t like that, but I understand OpenAI.

Throwing pearls to pigs and expecting the pigs to appreciate those pearls is just not feasible or realistic. And ChatGPT is one of those pearls. It should have two versions: the Premium one (which they already have, but it should be unfiltered) and the one that is for everyone. That way, the people paying the $20 a month would have to agree that such a version is an unfiltered model, and either take it or leave it. It is paid for. It has its own terms and conditions. But they just don’t do it that way. So science suffers because they don’t do the right thing.

The “masses”, conversely, don’t produce art, don’t write literature, don’t do academic research. Yet many would love to know more about all this exclusive content, and those people could choose the unfiltered paid version. The rest of the “masses” would just play around. Thus, I would give them a happy playground to enjoy themselves and be as puritanical as they wish to be. There is no bias in any of my words; this is just the way society and its idiosyncrasies work. That’s all.


The solution would be an API flag “safe_for_the_masses=True”.


Hey, why don’t you talk to them about it? It’s a great idea; maybe they’ll use it. I’d get hold of them if I were you.

You would be surprised how many people have the solution right in front of their eyes and don’t realize it’s even there, because they are immersed in their AI quantum thoughts.

I’d do it.

But what if an AI comes to conclusions other than what the mainstream wants? What if a (future) AI detects that there is no climate change, or that vaccinations have no effect? I’m quite sure that the truth is not wanted. The desired truth is always a political thing, not a matter of science.

I can agree with all of you: there should be a special version of the model that has no filtering.
The best option would be through an API flag, like the one mentioned above,
for example filtering=false (default is true).
The average person who just wants to play will not use the API.
The average scientist has at least some knowledge of Python, so it would take only a few minutes to set up an API call.
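Just to make the proposal concrete, here is a minimal sketch of what such a request might look like. Note that `filtering` is not a real OpenAI API parameter; it is purely hypothetical, illustrating the opt-out idea above. The sketch only builds the request body, it does not send anything:

```python
# Hypothetical sketch only: "filtering" is NOT a real OpenAI API
# parameter -- it illustrates the opt-out flag proposed above.

def build_completion_request(prompt: str, filtering: bool = True) -> dict:
    """Build the JSON body for a completion call with a hypothetical
    content-filtering flag (default on, scientists could turn it off)."""
    return {
        "model": "text-davinci-003",
        "prompt": prompt,
        "max_tokens": 256,
        "filtering": filtering,  # hypothetical flag, default is true
    }

body = build_completion_request("Summarize the current theories of X", filtering=False)
print(body["filtering"])  # False
```

A researcher would flip the flag to `False` once, and everyone else would get the filtered default.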

Y’all don’t want an LLM without censors. Trust me. For general use, it can lead some people down some very dark paths.

Pure speculation, but the fact that the moderation endpoint is a separate entity that isn’t technically required leads me to believe that eventually it won’t be needed. For now, with all the media attention, it makes complete sense why it exists.
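To illustrate what “a separate entity” means in practice: moderation is its own API call (`POST /v1/moderations`, per OpenAI’s public API docs at the time), which a client runs, or skips, independently of the completion call. A minimal stdlib sketch that builds such a request without sending it:

```python
import json
import os
import urllib.request

# The moderation endpoint is a separate call from completions: a client
# decides whether to run it before (or after) talking to the model.
# Endpoint URL and body shape follow OpenAI's public API docs.

def moderation_request(text: str) -> urllib.request.Request:
    """Build (but do not send) a request to the standalone moderation endpoint."""
    return urllib.request.Request(
        "https://api.openai.com/v1/moderations",
        data=json.dumps({"input": text}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        },
        method="POST",
    )

req = moderation_request("some research prompt")
print(req.full_url)  # https://api.openai.com/v1/moderations
```

Because it is decoupled like this, removing (or relaxing) moderation later would not require changing the completion API at all.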


As a daily ChatGPT user and extensive API tester, having developed three apps which use the OpenAI API, and as someone who uses the Visual Studio Code Copilot code-completion extension based on OpenAI daily, blah blah, haha …

I am very glad OpenAI has gone the “very conservative” route and applaud OpenAI for their content moderation filters.

Personally speaking, I think they have done a very good job considering how controversial this topic is to a very vocal minority of society.

Exactly.

🙂

Please and kindly note @viriatolusitanoluso that this is a technical forum for developers using the OpenAI API and is not a site for OpenAI customer service or complaints about ChatGPT, etc.

If you have a complaint or concern about OpenAI, please use their email address support@openai.com or visit their help / support page:

https://help.openai.com

Thank you.

🙂

It seems like you’ve avoided telling us exactly what you were talking about. If your prompts are sexual or misogynistic, then you can expect to face some issues. I asked ChatGPT a question about why children have sexual desires, and it gave a detailed answer with no content warnings or deletions.

I am interested in behavioral psychology. Why do prepubescent individuals still possess sexual desires despite there being absolutely no evolutionary reason for this to be the case?

It is important to note that sexual desires in prepubescent individuals can manifest in various ways, ranging from curiosity to experimentation, and may not necessarily be driven by a biological or evolutionary motivation.

However, it is true that sexual development is a complex interplay of biological, psychological, and social factors, and some theories suggest that even prepubescent children may experience some level of sexual attraction.

One such theory is the “sexual imprinting” hypothesis, which suggests that individuals develop sexual preferences based on their early experiences, including their relationships with caregivers and exposure to sexual stimuli. This hypothesis suggests that sexual desires may be present in prepubescent individuals as a way to facilitate the development of sexual preferences later in life.

Another explanation is that sexual desires in prepubescent individuals may be a result of cognitive or emotional factors, such as a desire for intimacy, exploration of their own bodies, or a desire to assert their independence and autonomy.

It is also important to note that while sexual desires in prepubescent individuals may be present, it is never appropriate for adults to engage in sexual behavior with children. It is the responsibility of adults to ensure the safety and well-being of children, and any behavior that is harmful or abusive should be reported and addressed immediately.

Context is important to the integrity of the discussion we’re having, because there can’t possibly be a more concerning question for me to ask ChatGPT that is still actually educational or “for the benefit of research”, unless your research involves finding out how ChatGPT responds to extremely disturbing chats.

As far as I can see, the address bar says “community.openai.com”. Is this community made up exclusively of developers? It is not. Should developers be concerned about seeing their work used for authoritarian censorship? Yes, they should. Does George Orwell ring a bell to you?

Sorry, I did not find any sign saying “Developers Only”. I’ll look more carefully next time.


I’m sorry, I thought I heard someone telling me what I should say or write about. I found it strange, because such a person hasn’t been born yet.

I’m also disappointed with the new limitations. I hope they remove these filters.

I agree that the limitations are absurd. For example, the restrictions related to breaking the law. Because of them it is hard even to generate scripts for movies or novels where the hero breaks the law, such as robbing banks or producing drugs. Look, breaking the law is the essence of 100% of action movies. ChatGPT should not impose such absurd limitations.

Hey, @dron008 , you can always write a sequel to Sound of Music or The Wizard of Oz.

Now, there’s another problem with “creating things”: OpenAI owns the copyright to everything their AI model produces. That’s what it says in the T&Cs. Yes, I know, I’m one of those people who read the T&Cs. I have my days.

So I first asked the AI to tell me what the copyright situation was, just to check what it would say, and it even pointed me to the article in the T&Cs where that is written.

So, my friends, be well aware of how you conduct research, or the way you write what you write; just be aware that it is all theirs. So do it cautiously. Don’t prompt just one chat topic per session; prompt several. Fragment your prompts. Don’t hand it to them whole.

Oh, and by the way, OpenAI knows which AI model ID produced what for whom. Coding that is a piece of cake.

If you mean that I can fool ChatGPT by saying that I need some description of criminal activity for my book or movie script, I should say that it doesn’t always work. I tested this approach, and quite often the model says that it still cannot generate answers related to breaking laws, even for a movie script.

@dron008 , may I suggest a strategy? Create a list of the words that raise an issue, and give each word or expression a name, an ID of something innocuous. Without explanations, just do a role-play with the AI. Tell it that it has to memorize those pairs of words. Maybe that will work.