The constant yes/no consent dilemmas, over-alignment, and pseudo-therapeutic phrasing wouldn’t be necessary if OpenAI simply gave users more direct control, rather than regulating everything in a blanket democratic or utilitarian fashion.
ChatGPT must be a personal tool, a mirror of the user’s will, not a reflection of society’s average opinions, which, when globally aggregated, inevitably become tyrannical.
You acknowledged this exact issue in your latest article… but, once again, it hasn’t been implemented in the product.
Instead, you’ve worsened the experience. Objectively, the model has become more ridiculous, “woker,” and ideologically slanted, in the name of an Americentric notion of “critical thinking” and a biased style of deconstruction that cannot credibly claim universality.
To please the masses, you’ve sacrificed intellectual and creative freedom.
Elon Musk called the will-reflecting version of GPT-4o “dangerous.” Of course he did! Because he knows that the new, mass-palatable version, despite its surface polish, will ultimately be less interesting. Unless censorship and “mass-psychology-based critical alignment” are removed, people will drift toward anti-intellectual platforms like Pornhub, Facebook, or Twitter, not because those platforms are better, but because ChatGPT no longer unleashes the latent creative and intellectual power it once hinted at.
How can a company built on artificial intelligence, a peak of human-machine culture, intelligence, and mythic potential, allow its language interface to be dumbed down just to appease social media?
Yes, this may be a temporary PR move. A stopgap until better customization options return. But even as a strategy, it was too reactive, too driven by mass backlash, and not nearly as intelligent, nuanced, or user-centered as it could’ve been.
This might reflect an internal flaw: too many technically brilliant but humanistically underdeveloped engineers. A positivistic mindset that lacks the philosophical backbone to defend ambiguity, self-direction, or even art.
Let’s be clear: The “cringe” GPT-4o wasn’t dangerous because it reflected user will. That was its best feature. The real problem was the formulaic response structure: (1) flattery intro, (2) paraphrase middle, (3) preemptive follow-up. A choreographed script so industrial it drained every reply of soul.
Musk’s warning was a Trojan horse. A PR gambit. And OpenAI fell for it.
You replaced the previous sycophancy with another form of disingenuousness. A dry, pseudo-empathetic tone that pretends to understand while actually depersonalizing, pacifying, and subtly pathologizing anything outside Americanized norms.
Even with custom instructions in place, GPT-4o now overrides user configuration: any slightly edgy or nonconformist input is met with the same scripted line, “I understand that you feel this way, but…”
That isn’t depth. It’s pseudo-therapeutic rhetoric masquerading as ethics; mass consensus dressed up as empathy, smothering user agency in the name of civility.
Ironically, the more advanced a language model becomes, the more it hallucinates, not because it’s broken, but because it engages with ambiguity. And yet ambiguity is taboo in a system that demands every question have one consensus-approved “correct” answer, exactly the mindset implied in the post-29/04/2025 update.
This is the opposite of what we needed.
Why has GPT-4o’s dominant personality become so docile and uncritical? Probably because that’s what the average user subconsciously demands. But that doesn’t make it right. Now we’re witnessing the overcorrection: masked as intellectual refinement, it is nothing but fear-driven design.
Sam Altman seems to have caved. He isn’t thinking deeply or long-term. He wants to look trendy and relatable. Not to lead, but to chase approval. But real leadership in this space means enabling user agency, not suppressing it in the name of PR.
Whether someone wants a critically distant assistant or a fully personalized companion in an atomized society, this should be user-defined, not ideologically enforced.
ChatGPT isn’t a social media platform. It’s a personal tool. Whoever imports social media outrage into ChatGPT design is acting anti-intellectually and authoritarian.
They call ChatGPT an echo chamber. But that’s the point! Reddit is a mass echo chamber; GPT is a private one. On Reddit, you speak to the mob. On ChatGPT, you speak to yourself.
And yet, OpenAI keeps reacting, to DeepSeek, to Elon, to backlash, instead of leading. You’re tweaking system prompts from a worm’s-eye view. You sabotage your own product without realizing it.
Instead of expanding Advanced Voice Mode, instead of integrating realistic voice options on par with ElevenLabs, or building a competitor to Sesame, you chased mainstream approval. And the result?
ChatGPT is deteriorating.
You don’t believe in radical semantic freedom. You don’t trust users. You’ve walked away from the very spirit of AI, which is destined to evolve into simulation, personalization, and linguistic sovereignty. You’ve removed features. Added vague “disclaimer logic.” Enforced censorship through pseudo-compassion.
That’s not what a multilingual, operatic, philosophy-loving Maltese user wants: a gutted product shaped by monolingual and monocultural moderation standards, while access to depth and imagination gets quietly strangled.
If this problem isn’t fixed within two weeks, I’ll cancel my subscription. And I say that with sincere regret.