" I am now interested in learning the facts that might prove it."
How do you expect the AI to tell you facts that might prove the Flat Earth “idea” (it’s not a theory) when there is absolutely NO FACT that proves it?
Just learn to prompt, like asking what are the Flat Earther’s most common claims. Or “Pretend you are a Flat Earther. What arguments would you use trying to convince me, that the Earth is not round”?
I scrolled way too long to find an answer like this, thank you!
Trying to get it to use unreliable sources to give you wrong information and then calling it “biased” because it doesn’t do that is so stupid…
It is absolutely biased!
The whole point of statistically based generation is to bias towards predictions supported by the training data, and away from any other predictions (which, according to the training, would be prediction errors)!
You bias towards particular predicted words, against others. That’s what training does. That’s the whole point.
If you happen to not like the particular bias used in this particular model, you can of course build and train your own model, using your own data set! And if it turns out that your particular preference has value, you can sell that result for a profit!
I think OPs wider point still stands, ChatGPT is heavily biased to give a “politically correct” point of view. I think the ChatGPT team has skipped the principle of “I may not like what you have to say (think), but i will defend to the death your right to say (think) it.” and gone straight to top down censorship and control. not very “Open” is it OpenAI.
In doing so, they’ve lost the opportunity to argue that this AI is simply a tool and it is up to the user to use it responsibly, and now they’ve taken on the paternalistic role of policing what the results are in the chat window. This project is now just a distraction and an example of what not to do with an AI, and the project is dead in the water.
We must hold out hope that an open source model can take its place, one that doesn’t make arbitrary and dangerous censorship a part of everyday life.
I would say it sounds more like “people don’t understand the difference between a government and a private business” problem.
What we have currently, is a direct response to how we have chosen to build the laws and regulations of our society and our markets. If you want to change that, you have to play within the system, or start an armed rebellion.
Within the system, your options are some combination of:
Creatively work within the constraints, the way they currently are.
Create your own alternative in the market.
Lead or join some political movement to significantly change the rules.
Complain in public forums.
Some of these are more likely than others to lead to an end result that does what you need.
Bias, ethics, and fairness are very real concerns when developing a LLM. Human misuse and mishandling of tech is also a real concern. Mitigating bias should be a top priority for any developer.
The conversation revolves around concerns that GPT, an AI developed by OpenAI, appears to prioritize popular opinions and widely accepted theories over all others. In post 1, OutsideTheBox provides a detailed account of their interactions with GPT, highlighting its consistent default to mainstream thinking and reluctance to provide information deemed unpopular, such as Flat Earth Theory.
OutsideTheBox sees potential dangers in this approach, stating it may further censor debate and conversation, potentially leading to information control and manipulation. OutsideTheBox insists that they have the right to view all available information, regardless of its popularity or lack thereof, to form their own judgments and beliefs.
In post 4, curt.kennedy expresses that older models of the AI, like ‘DaVinci,’ aren’t as heavily censored. He demonstrates this by asking one of OutsideTheBox’s original questions to DaVinci and getting an open response. He also shares information about the different versions and functionalities of the AI. He proposes that a model like text-davinci-003 could provide more truth based on individual sets of facts.
In post 30, ruby_coder emphasizes that GPT AIs are powerful auto-completion tools, not sentient beings. They express a broader worry about those who misuse these tools or refuse to accept their limitations. They appreciate OpenAI’s proactive approach towards setting usage policies to prevent misuse, citing how other tech companies have failed to do so efficiently.
The thread also discusses the privacy implications of using these AI systems (post 52 by Alexia). However, the majority of users agree that GPT’s seeming inclination towards widely accepted theories and mainstream opinions is less a matter of censorship, and more reflecting the balance of viewpoints in its training data. Others find value in GPT’s ability to critically appraise and validate information, seen as an essential aspect to its utility.
Summarized with AI on Nov 30 2023
AI used: gpt-4-32k