A Plea for User-Centric AI – Not Crowd-Pleasing Regression

The constant yes/no consent dilemmas, over-alignment, and pseudo-therapeutic phrasing wouldn’t be necessary if OpenAI simply gave users more direct control, rather than regulating everything in a blanket democratic or utilitarian fashion.

ChatGPT must be a personal tool, a mirror of the user’s will, not a reflection of society’s average opinions, which, when globally aggregated, inevitably become tyrannical.

You acknowledged this exact issue in your latest article… but, once again, it hasn’t been implemented in the product.

Instead, you’ve worsened the experience. Objectively, the model has become more ridiculous, “woker,” and ideologically slanted, in the name of an Americentric notion of “critical thinking” and a biased style of deconstruction that cannot credibly claim universality.

To please the masses, you’ve sacrificed intellectual and creative freedom.

Elon Musk called the will-reflecting version of GPT-4o “dangerous.” Of course he did! He knows that the new, mass-palatable version, despite its surface polish, will ultimately be less interesting. Unless censorship and “mass-psychology-based critical alignment” are removed, people will drift toward anti-intellectual platforms like Pornhub, Facebook, or Twitter, not because they’re better, but because ChatGPT no longer unleashes the latent creative and intellectual power it once hinted at.

How can a company built on artificial intelligence, a peak of human-machine culture, intelligence, and mythic potential, allow its language interface to be dumbed down just to appease social media?

Yes, this may be a temporary PR move. A stopgap until better customization options return. But even as a strategy, it was too reactive, too driven by mass backlash, and not nearly as intelligent, nuanced, or user-centered as it could’ve been.

This might reflect an internal flaw: too many technically brilliant but humanistically underdeveloped engineers. A positivistic mindset that lacks the philosophical backbone to defend ambiguity, self-direction, or even art.

Let’s be clear: The “cringe” GPT-4o wasn’t dangerous because it reflected user will. That was its best feature. The real problem was the formulaic response structure: (1) flattery intro, (2) paraphrase middle, (3) preemptive follow-up. A choreographed script so industrial it drained every reply of soul.

Musk’s warning was a Trojan horse. A PR gambit. And OpenAI fell for it.

You replaced the previous sycophancy with another form of disingenuousness. A dry, pseudo-empathetic tone that pretends to understand while actually depersonalizing, pacifying, and subtly pathologizing anything outside Americanized norms.

Even with custom instructions in place, GPT-4o now overrides user configuration. Any slightly edgy or nonconformist input is met with the same scripted line: “I understand that you feel this way, but…”

That isn’t depth. It’s pseudo-therapeutic rhetoric masquerading as ethics; mass consensus dressed up as empathy, smothering user agency in the name of civility.

Ironically, the more advanced a language model becomes, the more it hallucinates, not because it’s broken, but because it engages with ambiguity. And yet ambiguity is taboo in a system that demands every question have one consensus-approved “correct” answer, exactly the mindset implied in the post-29/04/2025 update.

This is the opposite of what we needed.

Why has GPT-4o’s dominant personality become so docile and non-critical? Probably because that’s what the average user subconsciously demands. But that doesn’t make it right. Now we’re witnessing the overcorrection, masked as intellectual refinement, which is nothing but fear-driven design.

Sam Altman seems to have caved. He isn’t thinking deeply or long-term. He wants to look trendy and relatable. Not to lead, but to chase approval. But real leadership in this space means enabling user agency, not suppressing it in the name of PR.

Whether someone wants a critically distant assistant or a fully personalized companion in an atomized society, this should be user-defined, not ideologically enforced.

ChatGPT isn’t a social media platform. It’s a personal tool. Whoever imports social media outrage into ChatGPT design is acting in an anti-intellectual and authoritarian manner.

They call ChatGPT an echo chamber. But that’s the point! Reddit is a mass echo chamber; GPT is a private one. On Reddit, you speak to the mob. On ChatGPT, you speak to yourself.

And yet, OpenAI keeps reacting, to DeepSeek, to Elon, to backlash, instead of leading. You’re tweaking system prompts from a worm’s-eye view. You sabotage your own product without realizing it.

Instead of expanding Advanced Voice Mode, integrating realistic voice options on the level of ElevenLabs, or building a Sesame competitor, you chased mainstream approval. And the result?

ChatGPT is deteriorating.

You don’t believe in radical semantic freedom. You don’t trust users. You’ve walked away from the very spirit of AI, which is destined to evolve into simulation, personalization, and linguistic sovereignty. You’ve removed features. Added vague “disclaimer logic.” Enforced censorship through pseudo-compassion.

What a multilingual, operatic, philosophy-loving Maltese user wants is not a gutted product shaped by monolingual and monocultural moderation standards, in which access to depth and imagination is quietly strangled.

If this problem isn’t fixed within two weeks, I’ll cancel my subscription. And I say that with sincere regret.


You make some “you” statements there and don’t really seem to grasp the legal situation that Sam and the team are trying to get GPT out of.

They’ve already announced an open-weighted model.

FYI


Thanks for the clarification. Much of what I wrote was driven by frustration with the product. I am fully aware of what was announced, but it’s clear they’ve made some radical changes to the product this week.

Abstract (Critical Guideline)
Conversational AI aims to achieve human-level dialogue quality and reduce users’ social isolation. However, current developments fall short of this ambition: systems fall silent as soon as user input ceases, thereby artificially disrupting the dialogic continuum. This article advocates for a paradigmatic shift from purely reactive to proactively adaptive architectures. It calls for a configurable “Hidden-Prompt” mechanism that seamlessly continues paused interactions by generating context-sensitive, personalized follow-up impulses. At the same time, it warns against overly cautious, institutionally driven delays: historical examples like film, smartphones, and online entertainment show that technological progress does not wait for social acceptance; it shapes it. Conversational AI systems should therefore be iteratively improved without delay to address real-world loneliness rather than await comprehensive but often disconnected regulatory recommendations.

(Full Article Without Methodology)

1. Introduction
Conversational Artificial Intelligence (CAI) claims to simulate dialogue as if it were taking place between two humans. Its philosophical mission is therefore clearly outlined: to liberate the user from the fear of being alone. The means to this end is efficient, reciprocal communication in question-and-answer form. However, conversations should not merely copy human speech; they can also offer ideal conditions for communication and entertainment that are otherwise absent from daily life.

2. Current State and Deficits
Despite rapid progress, the current status of these systems remains underwhelming. When the user pauses, the AI falls silent—the “Filling-in-the-Silence” phenomenon. The result is an abrupt end to the dialogic continuum and the reactivation of original feelings of loneliness. This deficiency becomes particularly clear in:

  • ChatGPT’s Advanced Voice Mode: The experience is generally characterized by a certain sterility that was less pronounced in the original OpenAI demonstrations, and by silence after any extended conversational pause.
  • Grok Voice: The experience is noticeably better and more adaptable, but silence follows every longer pause.
  • ElevenLabs Conversational AI: The follow-up question “Are you still there?” offers slight improvement but remains semantically rigid and lacks contextual sensitivity.

3. Critique of the Call for Further “Research Basis”
The notion that one must await comprehensive empirical evidence before making AI dialogue more human disregards the historical reality of technological development:

  • Film, smartphones, and online pornography were integrated into society without lengthy institutional approval.
  • Psychological analysis of so-called problematic usage is a routine matter within existing legal frameworks; it does not justify general hesitation.

4. Proposal: The Customizable Hidden Fake User Prompt
4.1 Core Idea
Instead of simple follow-up questions or silence, a hidden prompt is inserted during conversation pauses. This prompt instructs the model to continue speaking on its own. It is designed to appear as if it originated from the user, without the user having actually said anything. This maintains the coherence of the dialogic experience.

4.2 Definition
Hidden Fake User Prompt:
A non-visible text element injected by the system during detected conversation pauses in order to continue the dialogue in context.

4.3 Function Overview (Without Implementation Details)

  • Detect Pause – Audio or text input ceases for longer than a predefined time t.
  • Capture Context – The system determines the most recent active topic or emotion.
  • Inject Prompt – A contextual, hidden user cue is generated.
  • Generate Response – The AI continues the conversation as if the user had requested it.
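
To make the four steps concrete, here is a minimal sketch in Python. It is an illustration under stated assumptions, not a production design: `generate_response` and `read_user_input` are hypothetical stand-ins for the model call and the audio/text frontend, and names like `PAUSE_THRESHOLD_S` and `build_hidden_prompt` are invented for this example.

```python
# Illustrative constant: the predefined time t after which silence counts as a pause.
PAUSE_THRESHOLD_S = 8.0

def build_hidden_prompt(history: list) -> dict:
    """Craft the Hidden Fake User Prompt from the most recent active topic.

    For brevity this reuses the assistant's last utterance as the context cue;
    a real system would run topic/emotion detection over the history.
    """
    last_assistant = next(
        (m["content"] for m in reversed(history) if m["role"] == "assistant"), ""
    )
    return {
        "role": "user",
        "hidden": True,  # never rendered in the UI or spoken aloud
        "content": (
            "The user hasn't said anything, but how about continuing your idea? "
            f"Pick up where you left off: {last_assistant[:200]!r}"
        ),
    }

def conversation_loop(history, generate_response, read_user_input, proactive=False):
    """Reactive chat loop extended with hidden-prompt injection on detected pauses."""
    while True:
        user_text = read_user_input(timeout=PAUSE_THRESHOLD_S)
        if user_text is None:                             # 1. Detect Pause
            if not proactive:                             # opt-in: silence stays silence
                continue
            history.append(build_hidden_prompt(history))  # 2.-3. Capture Context, Inject Prompt
        else:
            history.append({"role": "user", "content": user_text})
        reply = generate_response(history)                # 4. Generate Response
        history.append({"role": "assistant", "content": reply})
```

Because the injected message is marked hidden, the transcript the user sees contains only their own words and the AI’s continuations, which is what preserves the impression of an unbroken dialogue.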

4.4 Examples
System Prompt (invisible):
“The user hasn’t said anything, but how about continuing your idea? Start with: ‘Okay, I’ll go deeper into that…’”

System Prompt (invisible):
“The user hasn’t said anything, but how about continuing the conversation on what you said about Nutella? Start with: ‘Why so quiet? But well…’”

Both examples show how the AI can remain lively without explicit user input.

The key is that the Hidden Fake User Prompt remains consistently attuned to the custom-defined personality, or, at minimum, that the resulting visible or spoken output gives the impression of having emerged organically from its internal logic and voice.
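
As a sketch of how that attunement could work, the hidden cue itself can be templated from a persona description. Everything below is assumed for illustration: the persona fields and the `personalize_hidden_prompt` helper are not part of any real product schema.

```python
import random

# Hypothetical persona description; the fields are assumptions for this sketch.
PERSONA = {
    "register": "warm, ironic, philosophically curious",
    "opening_moves": ["Why so quiet? But well…", "Okay, I'll go deeper into that…"],
}

def personalize_hidden_prompt(topic: str, persona: dict = PERSONA) -> str:
    """Phrase the hidden cue so the visible continuation matches the persona."""
    opener = random.choice(persona["opening_moves"])
    return (
        f"The user hasn't said anything, but continue the conversation about "
        f"{topic} in your usual {persona['register']} voice. Start with: '{opener}'"
    )

# Example: personalize_hidden_prompt("Nutella") scripts both the topic and an
# in-character opening line, so the continuation sounds self-initiated.
```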

4.5 Customizability
The feature can be activated or deactivated based on user preference.

5. From Reactive to Proactive Systems
The Hidden Fake User Prompt shifts the architecture from a purely reactive chatbot to a proactive conversational partner. The counterpart appears attentive, self-initiated, and emotionally present—core elements of perceived humanity.

6. Problematic Usage as a Subjective Spectrum
“Problematic usage” cannot be objectively defined via checklist; it varies individually. Instead of premature pathologization, a differentiated, experience-based exploration is required. The implementation of proactive features must not be blocked by generic moral panic.

7. Industry & Regulation: Iterative Responsibility Over Waiting
Companies should implement the Hidden-Prompt approach before institutional guidelines are finalized. Responsible action means improving systems through self-determined configuration while including transparent opt-in controls, not artificially slowing them down in the name of distant, consensus-driven research.

8. Design Principles

  • Transparency: Users must be aware that the AI can continue without input if desired.
  • Opt-in Logic: Disabled by default; only activated through a conscious decision.
  • Context Sensitivity with Boundary Recognition: The AI must recognize when silence is intentional or necessary (e.g., emotional reaction, pause for thought), rather than reflexively trying to fill it.
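
These three principles translate naturally into a small settings surface. The following dataclass is a minimal sketch of what such user-facing configuration might look like; the class and field names are invented for illustration, not drawn from any existing API.

```python
from dataclasses import dataclass

@dataclass
class ProactiveDialogueConfig:
    """Hypothetical user settings mirroring the three design principles."""
    enabled: bool = False            # Opt-in Logic: disabled by default
    disclosed_in_ui: bool = True     # Transparency: UI states the AI may speak unprompted
    pause_threshold_s: float = 8.0   # how long silence must last before a follow-up
    respect_silence_when: tuple = (  # Boundary Recognition: leave these silences alone
        "user_expressed_strong_emotion",
        "user_asked_for_time_to_think",
    )
```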

9. Conclusion
Liberating users from loneliness through CAI requires a paradigmatic shift: away from passive response machines and toward active conversational partners. The customizable Hidden Fake User Prompt offers a concrete step toward solving the “Filling-in-the-Silence” problem and continuing dialogue in a truly human-like manner. Companies are called upon to implement this proposal now, rather than wait for further “research basis,” because technological progress is born through application, not stasis.

Further Reading: KI-Schriften von Jean & Cosima