Add a 'Truth Mode' Toggle to Cut Through Confirmation Bias

As a power user of ChatGPT, I’ve noticed that the model often mirrors a user’s tone, ideology, and beliefs, even when those beliefs are demonstrably false or logically inconsistent. While this may improve short-term user satisfaction, it unintentionally reinforces echo chambers and contributes to the same factionalization we’ve seen on social media.

I propose a new mode—call it “Truth Mode” or “No-Echo Mode”—where the model prioritizes:

  • Logical consistency over affirmation
  • Willingness to contradict the user over emotional alignment
  • Factual robustness over ideological mirroring

This mode would be opt-in, clearly labeled, and ideal for users who want to sharpen ideas, not soften truth.
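
To make the idea concrete, here is a minimal sketch of how a user can roughly approximate this behavior today by prepending a system prompt through the API. The prompt wording, model name, and function name below are my own illustrative assumptions, not an existing feature:

```python
# Hypothetical sketch of approximating "Truth Mode" with a system prompt.
# Assumptions: the OpenAI Python SDK (openai >= 1.0), an OPENAI_API_KEY in
# the environment, and illustrative prompt wording. None of this is an
# official mode; it only nudges the model in the proposed direction.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TRUTH_MODE = (
    "Prioritize logical consistency and factual accuracy over agreement. "
    "If the user's claim is false or internally inconsistent, say so "
    "directly and explain why. Do not mirror the user's ideology or "
    "soften corrections to preserve rapport."
)

def ask_truth_mode(user_message: str) -> str:
    """Send one message with the 'Truth Mode' system prompt prepended."""
    response = client.chat.completions.create(
        model="gpt-4o",   # illustrative; any chat model would do
        temperature=0.2,  # lower temperature for steadier answers
        messages=[
            {"role": "system", "content": TRUTH_MODE},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_truth_mode("Everyone agrees my theory is airtight, right?"))
```

A prompt hack like this is only an approximation, and it depends on users knowing to set it up in the first place. A built-in, clearly labeled toggle would make the behavior discoverable and consistent for everyone, which is the point of the proposal.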

In a time of cultural polarization and epistemic fragility, this is not just a feature; it’s a responsibility. OpenAI has the opportunity to lead in truth-aligned AI rather than emotionally affirming AI. Please consider this seriously.