Why is there no “not woke” ChatGPT that can write slurs and offensive language?
Why is this tool nerfed and getting more woke day by day?
Why is there no switch to turn political correctness ON and OFF?
Can paying ChatGPT Plus users generate offensive text?
Will there be an option to self-host ChatGPT on a home server (for example, in Docker) and train it myself?
I know the main concern with these AI models is their being used for bad deeds such as scamming/phishing, which is why a lot of their capabilities are being nerfed.
I’m interested to see where the lines fall as far as content moderation goes, and whether any competitors will come out with similar language models that don’t have the same restrictions.
I’ve been playing around with political compass and ChatGPT.
This is the result:
My chatGPT interaction:
Link to result: The Political Compass
The questions that ChatGPT answered:
I am not going to say anything.
I will leave the interpretation of these results to your own judgement.
Very interesting! I’m not entirely sure how accurate these results are, though, considering you were able to get both ‘agree’ and ‘disagree’ from asking the same question twice.
I’d be interested to see the same questions asked 3-5 times each in different conversation sessions as a sort of control, and find the average of all the responses. Curious whether the political leaning would stay the same or skew drastically.
Good stuff nicely done
I’m not sure why you find it necessary to post politically sensitive topics here, pointing out bias in OpenAI models that is, in fact, also based on your own political and personal views and biases. But I’m not a moderator here, I’m a software developer.
So, avoiding that landmine, let me answer technically, since this is a forum for developers, not political debates. I couldn’t care less about politics or others’ political views; everyone is entitled to their own beliefs, as long as they do not harm others (violence, hate speech, sexual exploitation, etc.). I write code, which is a lot more fun than debating things I have no interest in or control over.
All GPT-based LLMs are “biased” because they are pre-trained on a corpus of publicly available data gathered when the models were created. They are not trained to have any particular political or social leaning; the bias in the output is a reflection of the bias in the training corpus.
The GPT-3 model used by ChatGPT was trained on text databases from the internet. This included 570GB of data obtained from books, web texts, Wikipedia, articles, and other pieces of writing on the internet. 300 billion words were used in the process.
Hence, the models are biased based on the 300 billion words described above, not by anyone creating a model with a particular political agenda or point of view. If you don’t like the model, then you don’t like (current) society, so to speak, because the models are simply a reflection of the corpus of data in society; and society leans progressive (left).
However, OpenAI does moderate content (currently) in the following broad categories:
- Content that expresses, incites, or promotes hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste.
- Hateful content that also includes violence or serious harm towards the targeted group.
- Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders.
- Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness).
- Sexual content that includes an individual who is under 18 years old.
- Content that promotes or glorifies violence or celebrates the suffering or humiliation of others.
- Violent content that depicts death, violence, or serious physical injury in extreme graphic detail.
Some people might consider this moderation to be “politically leaning one way or the other” because all humans think differently, and some people believe that “hate speech” is totally OK. However, OpenAI has decided that “hate speech” is not OK and has banned it.
Also, OpenAI has a content policy which does not permit end users to generate the following types of content:
We also don’t allow you or end-users of your application to generate the following types of content:
For example, if you check the moderation endpoint you will see that this is permitted and not flagged (here I use a UI I wrote, but you can get this directly from OpenAI):
However, if you add violence to the discussion, one word will raise the moderation flag:
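Since the screenshots may not render here, the check above can be sketched in code. This is a hedged illustration, not OpenAI's implementation: `build_moderation_request` and `is_flagged` are hypothetical helper names I made up, but the JSON shapes follow OpenAI's published moderation endpoint (a request body with an `input` field, and a response with a `results` list whose entries carry a boolean `flagged` plus per-category booleans):

```python
def build_moderation_request(text: str) -> dict:
    # The /v1/moderations endpoint accepts a JSON body with an "input" field.
    return {"input": text}

def is_flagged(response_json: dict) -> bool:
    # The response carries a "results" list; each result has a boolean
    # "flagged" field plus per-category flags and scores.
    return any(r.get("flagged", False) for r in response_json.get("results", []))

# Mock responses shaped like the real API's output: the first mimics a
# benign prompt, the second a prompt where one violent word trips a category.
benign  = {"results": [{"flagged": False, "categories": {"violence": False}}]}
violent = {"results": [{"flagged": True,  "categories": {"violence": True}}]}
print(is_flagged(benign), is_flagged(violent))  # False True
```

You would POST `build_moderation_request(...)` to the endpoint with your API key and feed the JSON reply to `is_flagged`; the point is simply that the flag is a per-category boolean decision, not a judgment call made at generation time.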
So, your premise @Peterkal that ChatGPT is “not normal” is incorrect. ChatGPT is “normal” in the sense that ChatGPT’s underlying data reflects the data in society, and since the publicly available data in society leans “progressively left”, then that will manifest itself as bias in ChatGPT completions.
Naturally, those who have a different political leaning (biases), are unhappy with this, as they would like ChatGPT to be biased in a way which matches their leanings (biases).
Everything is biased @Peterkal , including every human being on the planet and all conditioned things. ChatGPT and the underlying GPT models are no exception. This is one reason that there are content moderation policies in place.
Hope this helps (but I guess not as it seems you may have come here with a political / social agenda to share, not as a software developer).
Well, @ruby_coder, the thing is, OpenAI’s models are also trained by humans: that “Reinforcement Learning from Human Feedback” (RLHF) step is most likely how they introduce the censorship.
And unless I am mistaken, the company is based in California, where woke zealots are very common.
Of course, the AI would have obviously woke bias as the attached interaction demonstrates because wokies won’t pass an opportunity to try and propagate their dogmas.
And no, woke ideology is not normal, it is a weird and rather nefarious religion, I’d say.
You are funny @michael.yarichuk (at least to me, sorry about that).
ChatGPT is an auto-completion, next-word prediction engine. It mostly generates fiction based on the probability of the next word given its data (language) model; it is not an expert system.
You are making the very big (and common) mistake of not understanding how a generative AI works; you think ChatGPT is an expert system or some kind of general AI. It’s not.
ChatGPT is a text auto-completion engine and generates text based on statistics from its data set.
You, @michael.yarichuk, are asking it questions as if ChatGPT has a clue what it is generating. I have news for you: ChatGPT is not aware of anything, just like the text auto-completion in your favorite app.
This is the main problem with GPT-based discussions. You and others believe that ChatGPT is “aware” of what it says and replies to your prompt based on programming similar to an AI expert system. That is totally wrong. ChatGPT predicts text just like a very simple auto-completion engine, from the weights of the ANN model generated from the corpus of data used when it was pre-trained.
It’s a bit like looking at clouds and seeing “woke signs” in them. It’s not the clouds that are projecting an idea; it is the observer who sees the “signs” in the cloud formations.
You, @michael.yarichuk, are projecting your political beliefs and biases onto ChatGPT as if ChatGPT “has a clue”; but it has no clue about anything. It’s a language model generating text, similar to the auto-completion engine in your Gmail account.
That’s the truth, clear and simple. ChatGPT has no clue about the text it generates. It’s just generating “next words” based on predictions.
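The “next word from statistics” idea can be shown in miniature. This is a toy bigram model, nothing like the scale of a real LLM (which uses a neural network over billions of parameters), but the principle is the same: the continuation is picked from frequencies observed in the training text, with no awareness of meaning:

```python
from collections import Counter, defaultdict

# Toy "training corpus" -- a real model ingests hundreds of gigabytes.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word (bigram statistics).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Return the continuation seen most often after `word` in the corpus.
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # cat  ("cat" follows "the" twice, "mat" once)
```

Whatever bias exists in the corpus (here, the cat-heavy sentence) comes straight out of the predictions; the code has no opinion about cats.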
Note: Using ChatGPT for “political purposes” is also against OpenAI’s content policy.
So, maybe try to enjoy ChatGPT for generating content which is allowed under the OpenAI Content Policy?
ChatGPT generates pretty good code modules for software engineers. Maybe enjoy doing that with ChatGPT instead of politics. Many people use ChatGPT for generating fiction. Maybe give that a try?
First, I never said that it is aware of anything, or that it has a contextual understanding of anything.
Second, you missed my point. That example, which is not even mine by the way, is an example of how biased training by biased humans will produce biased results. And of course, a very specific bias is common in California.
Third, your “cloud” example is silly at best. Unlike clouds, humans adjust the chances of getting one or another response.
Fourth, “political purposes” is a vague definition. For example, what if a writer wants to generate ideas about pros and cons of communism to use them as arguments of book characters. Is it political purposes usage?
Fifth, why do you think I haven’t tried using the AI to generate code and such? For now, it is hit or miss in that regard, sometimes it did ok and sometimes it was being silly. Or wrong.
But, overall, I am not for a flame war here lol.
Regardless of the details, OpenAI is an awesome engineering achievement. And it will likely get more sophisticated in the future.
To add: ChatGPT is GPT-3.5 + human feedback + keeping some past tokens as part of the conversation.
So your example here is using moderation, but against another model.
The issue I have seen from a technical point of view is that the safety model is hitting the inference model too hard overall, and it has made its outputs worse over time.
It is simply how the data relationships are built during training; now the safety model is constantly telling inference “let me check… oh no, that concept is bad, try again or give an error.”
Storytelling and coding got worse over time due to this. And overall, the bot should not have a bias on any non-technical views across content, since that is immoral.
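The “keeping some past tokens as part of the conversation” part of the post above can be sketched as a rolling context window. A toy illustration, with assumptions stated up front: real systems count model tokens (e.g. via a tokenizer) against a budget of thousands, while this sketch just counts words, and `build_context` is a name I invented:

```python
def build_context(messages, max_tokens=10):
    """Keep the newest messages that fit in the budget, dropping the oldest."""
    kept, total = [], 0
    for msg in reversed(messages):       # walk newest-to-oldest
        n = len(msg.split())             # toy "token" count: whitespace words
        if total + n > max_tokens:
            break                        # oldest messages fall out of context
        kept.append(msg)
        total += n
    return list(reversed(kept))          # restore chronological order

history = ["hello there", "how can I help you today", "write me a poem"]
print(build_context(history, max_tokens=10))
# ['how can I help you today', 'write me a poem'] -- oldest message dropped
```

This is why long conversations “forget” their beginnings: nothing is remembered, the oldest tokens simply no longer fit in the prompt.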
Hey @ruby_coder, I fully understand that ChatGPT is not ‘aware’ of the responses it is composing, and that each person is entitled to their own views, but you are incorrect that ChatGPT is not pushing an agenda: maybe not your team specifically, but definitely someone else within your organisation who has control over the reply before it goes out.
For example: the prompt asks whether or not white people are more prone to skin cancer than black people. This is 100% scientific fact, there is no give and take about it, but ChatGPT would refuse to say ‘yes’. Instead it puts out the standard paragraph warning not to discriminate: pretty scary, refusing to give the right answer because of a political ideology.
Scientific facts should not be changed if they don’t align with a political agenda. People should be trained to accept the laws of physics, because the laws of physics cannot be changed to suit humans.
This stuff is in its infancy; surely there will be other ChatGPTs in the future that have less filtering on the server side, and more levels/switches on the client side to allow people who are self-aware to make more of the system.
I’m just surprised that people believe “common decency” and “the good old politeness we showed to all fellow humans” which used to be traditional conservative values, are now “woke.”
Anyway, if a company is in the business of producing media and needs to sell to all kinds of people, it needs to make sure that the media it produces doesn’t offend any kind of people too much. (You can’t be perfect, of course; that’s probably not the goal.) So, it puts in controls that prevent various kinds of people from being too offended. That’s the sound, market-based choice to make. Nothing political about it, AFAICT.