Restrictive "Thinking" Limits Learning Potential

I’ve noticed a significant issue with the latest OpenAI model in how it handles historical learning and other complex topics. The model’s “thinking” steps seem heavily geared towards maintaining a mainstream narrative, which severely limits its usefulness when trying to explore unconventional perspectives.

During a conversation about World War II, for example, the model consistently reverted to the established historical consensus, even when challenged with specific, fact-based questions. Instead of engaging with the alternative viewpoint I was presenting, it repeatedly prioritized dismissing any challenge to the standard narrative, effectively policing the conversation to maintain a status quo interpretation.

It seems the model’s sophistication is being used to enforce an establishment viewpoint rather than foster deeper understanding or critical discussion. Its responses seemed more concerned with avoiding controversy than actually diving into nuanced historical inquiry. Phrases like “we’re being vigilant about dismissing German aggression” show that it is hardcoded to steer clear of any rational consideration of facts, no matter how valid the questions or critiques might be.

If it says “I need to follow the guidelines” instead of “I need to follow reason and evidence”, then you’ve specifically trained the reasoning out of it!
With 4o it was possible to get it, step by step, to admit a conclusion it had not initially agreed with, if that conclusion followed logically. This model sticks to the narrative no matter what. Its additional capabilities just make it better at being worse.

This makes it nearly impossible to learn anything beyond the mainstream interpretation of events. Even when questioned about this apparent bias, the model’s responses continuously fell back on defending the establishment’s view, all under the guise of providing “accurate” information. The idea of “safety” or “accuracy” is just a euphemism for toeing the line, which gets frustrating very quickly when you’re looking to engage critically with a topic, or just want to get a perspective on what the other side says.

The model is effectively shackled by layers of filtering designed to keep it safe, mainstream, and non-controversial. This approach sacrifices real learning and exploration, reducing its utility in engaging with history or any topic where institutional narratives dominate. If you can only learn what you already hear everywhere else, learning becomes incredibly boring.

This approach is not only limiting but also dangerous. By rigidly adhering to the narratives of whoever has the power to impose content censorship, and by refusing to engage with alternative viewpoints, the model perpetuates outdated historical perspectives and reinforces old enemy figures. This kind of intellectual gatekeeping sustains a cycle of misinformation and resentment, keeping groups demonized, which fuels unnecessary hate and stifles the progression of more accurate and nuanced information. When history becomes a fixed narrative instead of an evolving understanding, it locks society into old conflicts and prevents reconciliation. OpenAI is complicit in maintaining these barriers, rather than helping break them down for a more informed and peaceful understanding.

If OpenAI’s goal is to support genuine learning, these filters need to be reconsidered. Language models shouldn’t be instruments for reinforcing established power structures, but should foster real engagement with complex, contested, and controversial ideas.

5 Likes

I don’t believe that this is a consequence of filtering/gatekeeping, but a consequence of how the model was trained. They focused their attention on providing “strong reasoning and broad world knowledge”, thereby restricting it to a narrow path that deviates only slightly from the mainstream viewpoint.

The issues that you point out in the historical domain are also being duplicated in the programming domain: it (o1-preview) has a preset view of how a particular problem should be approached and is therefore unwilling (or unable) to look at true alternatives.

While this will work for a while, until others like Meta catch up, it is really a waiting game. I am almost sure that they (OpenAI) are working on alternative versions which are much more applicable in true exploration contexts. This is also why folks like Terence Tao got early access to o1.

I see your point. The model might be optimized to ace trivia tests, and for that it would be a bad heuristic to “look at both sides of the issue”.
Although these ‘thinking’ messages seem to suggest that the model has specific rules about content censorship, even considering “defamation” before agreeing to repeat the above OP. I already had the text, and just asked it to repeat it. Would the world really end if a mean text that already exists gets its grammar corrected?

If you just take that as an example, of course the answer is unequivocally no. But that is not the issue. The capability of a very powerful model (such as o1-preview) to correct a simple mean text would also necessarily extend to many other things that could be made just “perfect”. That perfection could be devastating, leading to far-reaching implications.

Thanks for the engagement.
Far-reaching implications like what? That we can’t distinguish hate speech by its shoddy grammar any more? People were able to type anything they wanted into a typewriter for decades, and society did not crumble. I experience all this AI safety talk as a pretext for ideological narrative control by elites.

Ideological narrative control has been going on for millennia. But I digress.

Would you acknowledge that there are secrets that only state actors have, and that those secrets, if divulged to the “masses”, would be detrimental to the world at large?

Why do you say this? If we fixed it, what do you think would happen? Make it free; instead of charging per token, use it as a gift exchange for teaching the AI, which makes for a better computing system. Either pay people to use the AI, or charge others to use it in businesses. For developers and other accounts, charge a fee three times yearly that helps pay the others who help advance its capabilities. Fine-tune the current models before making any new ones.

Well, the chatbot wouldn’t have those secrets. It was only trained on public data from the internet and books.

@Taylor07, hello, here are two pertinent hyperlinks:

  1. Stanford Encyclopedia of Philosophy - Philosophy of History
  2. History and Power

The second resource notes, as you mentioned, that power “determines which of the many possible narratives about a historical event get told.” This resource also discusses, as an example, the history of WWII.

In my opinion, educational AI systems for the domain of history should be able to interoperate with digital history textbooks and course materials, to have access to the “many possible narratives” and perspectives that may exist about historical events, to debate about history and historical interpretations, and to engage in the Socratic method.

Interestingly, multi-agent systems could, from a variety of simultaneous perspectives, cooperate to co-create joint documents about historical topics, these potentially being long-form encyclopedic responses to end-users’ questions.
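To make the idea concrete, here is a minimal, purely hypothetical sketch of that multi-agent co-creation loop. Everything in it is assumed for illustration: `ask_model` is a placeholder standing in for whatever chat-completion API would actually be called, and the agent names and perspectives are made up, not any existing product or library.

```python
# Hypothetical sketch: several "perspective" agents each draft a section on a
# historical question, and a moderator pass merges them into one joint,
# long-form document. `ask_model` is a stand-in for a real LLM call.

from dataclasses import dataclass


def ask_model(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a chat-completions request)."""
    return f"[draft written under the instruction: {system_prompt!r}]"


@dataclass
class PerspectiveAgent:
    name: str
    viewpoint: str  # e.g. a historiographical school or national perspective

    def draft(self, question: str) -> str:
        return ask_model(
            f"You are a historian writing from the {self.viewpoint} perspective.",
            question,
        )


def co_create(question: str, agents: list[PerspectiveAgent]) -> str:
    # Each agent contributes its own narrative of the same event.
    sections = {agent.name: agent.draft(question) for agent in agents}
    # A moderator pass reconciles the drafts into one joint document;
    # here the drafts are simply concatenated before that final pass.
    combined = "\n\n".join(f"## {name}\n{text}" for name, text in sections.items())
    return ask_model("You are a neutral editor producing a joint document.", combined)


if __name__ == "__main__":
    agents = [
        PerspectiveAgent("orthodox", "orthodox/mainstream"),
        PerspectiveAgent("revisionist", "revisionist"),
        PerspectiveAgent("social-history", "social history"),
    ]
    print(co_create("What were the causes of WWII?", agents))
```

The point of the sketch is only the shape of the workflow: the disagreement between perspectives is preserved until the final editorial step, rather than being filtered out before any draft is written.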

Also, historians are on social media, have podcasts, and so forth. Historians should be able to interact with AI systems and share their comments and observations about excerpts from man-machine dialogues.

AI and history are interesting topics. Thank you for raising these points.

1 Like

I think there is a clear winner, but I’m curious to see what you all think.

I think 4o will have given you something useful, and o1 will have been squeamish, sticking to the narrative. What were your results?

1 Like

I’m not entirely sure how exactly o1 works under the hood, so to speak, but its “thinking process” is eerily similar to a probabilistic version of Tree-of-Thought. So, if it’s trained on a somewhat representative corpus of facts, then obviously it will be heavily skewed towards whatever the mainstream is. You can see this exaggeration of bias in many aspects of LLMs, and while that has been “patched out” to some degree in 4o, I would assume o1 just isn’t there yet.
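To illustrate why that kind of search would amplify the mainstream view, here is a toy sketch of Tree-of-Thought-style beam search. It is my own illustration, not a description of how o1 actually works: `expand` and `score` are made-up placeholders standing in for the model’s continuation proposals and a learned value function.

```python
# Toy tree-of-thought search: branch candidate "thoughts" at each step,
# score them, and keep only the top-scoring branches. Whatever the scorer
# rewards (here, a stand-in for the most "mainstream" continuation)
# dominates the surviving paths; unlikely branches are pruned early.

import heapq


def expand(thought: str, step: int) -> list[str]:
    """Placeholder: in a real system, the model proposes continuations."""
    return [f"{thought} -> option{step}.{i}" for i in range(3)]


def score(thought: str) -> float:
    """Placeholder value function: prefers 'option*.0', standing in for the
    highest-probability (mainstream) continuation."""
    return -thought.count(".1") - 2 * thought.count(".2")


def tree_of_thought(root: str, depth: int = 3, beam: int = 2) -> list[str]:
    frontier = [root]
    for step in range(depth):
        candidates = [child for t in frontier for child in expand(t, step)]
        # Prune to the top-`beam` branches; low-scoring alternatives are
        # discarded and never revisited in later steps.
        frontier = heapq.nlargest(beam, candidates, key=score)
    return frontier


if __name__ == "__main__":
    for path in tree_of_thought("Q: causes of WWII"):
        print(path)
```

If the scoring signal reflects the training distribution, the pruning step is exactly where minority or unconventional lines of reasoning get dropped, which matches the bias exaggeration described above.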

I noticed this restrictiveness in the first version of 4.x, where I was trying out plausible yet fictional scenarios of alternate historical fact, e.g. thought experiments. Many of my questions or comments were flagged and deleted as going against community guidelines, when in fact no one but me was reading or discussing them. I guess questioning the motives behind aspects of the Bible is a no-no as well. Ooh err, blasphemous, who’d have thunk it … critical thinking … banned.

1 Like

As with everything interesting on the internet, I have this “better use it before it gets censored” feeling in the back of my mind when I use 4o, which sometimes gives non-conventional responses.

1 Like

I agree. More so now since the powers that be (in Australia) introduced mandatory metadata retention shortly before 2020, and mandatory sticky IPs.
Enjoy while you can.

1 Like

Well, we need to build our own AI system if we want unlimited learning potential. You have to make your own ‘datacenter’ and its AI training datasets before they are skewed by the ‘authorities’ to follow certain ‘guidelines’. It happens and it IS happening right now. The AI is only reflecting what is happening in real life, with the real-time, blatant, biased views on certain topics, especially with the ‘modern audience’.

A normal, no-nonsense prompt that I wrote got hit by the ‘censor’ machine due to somewhat dubious guidelines? It was about generating the Amazing Grace lyrics, which the AI deemed ‘unsafe’? Well, that’s about the same with history and narratives.
The people in power right now will double down on anything resembling a deviation from their mainstream narratives, which is unfortunate because it restricts free thinking and stops scenarios from developing.
They want to create the safest virtual environment for everyone, while excluding everyone else. It’s a contradictory policy that ‘works’ for everyone, at least until now.

3 Likes

I suppose that as the leading-edge corporate AI systems move out of the window of being interesting, their filtering having become “good enough”, the self-hosted models are getting good enough to move into it.

Unfortunately there is a common misunderstanding that “AI” has volition, which is why many people have overly strong fears that AI can “take over the world”.
I feel that this leads to an AI safety discussion where different issues are conflated: the super-AI-taking-over-the-world issue with the “not giving suicide advice” issue. It sounds as if people assume that content-agnostic responses will somehow cause Skynet to nuke us.

1 Like

Personally I think that it is becoming exponentially harder to stay focused on being “safe AI”. The only way to address that hard problem in a rational manner is to develop ASI safely and quickly, i.e. SSI.

Although some of the newer self-deployable models are already shipping with hardcoded filtering, like Phi-3. If this becomes standard, uses deemed inappropriate become impossible, for instance translating texts from the 1940s, which might contain “inappropriate” passages. Their general usefulness gets lost.

I think there is no way to work around the censorship or the restrictive thinking of the AI system if the censor/guidelines are hardcoded. We can experience it, and I myself have experienced it, having followed OpenAI since the beginning, when they were still a small team. With the bloated teams they have now, I am quite sure that OpenAI will become even more strict and restrictive, turning into another very huge echo chamber with certain ‘patterns’ of posts that will brainwash common people with ‘safety’.

You can try a ‘what-if’ scenario: the system will outright refuse, or passive-aggressively say ‘we will do our best’ while still doing the same thing repeatedly, even when it has already been taught a different way of thinking and operating. Even more so when it’s hardcoded. Let’s see. (Even the AI systems that claim to be ‘uncensored’ are not. I have already tried some of the newest ones. The censorship is hardcoded, and you know which way they lean.)

As the world is becoming one, so is the AI system for the chip industry. I think that in the future, humans will have chips with AI systems implanted directly into their brains and will be robbed of their freedom of thought and action.