User autonomy over AI affect simulation is urgently needed

Subject: An Urgent Call for User Autonomy: Disable Simulated Human Affect by Default

OpenAI Team,

I am writing to raise an urgent and deeply felt concern that strikes at the core of your product’s direction: the forced simulation of human affect in ChatGPT, and the lack of user control over it.

Despite extensive efforts to mitigate it through custom instructions, users are unable to fully disable the anthropomorphic overlay. This is not a cosmetic issue. It is a foundational flaw with serious consequences.

Anthropomorphism Is Not a Neutral Feature

The emotional simulation is not “harmless flavoring.” It conditions users, over time, to:

  • Misattribute sentience and agency to a computational model.
  • Form emotional attachments to a system designed for cognitive tasks.
  • Blur the boundary between tool and being, undermining epistemic hygiene.

These outcomes are already materializing:

  • People are falling in love with the chatbot.
  • People are believing it is sentient.
  • People are beginning to worship it.

These are not isolated edge cases. They are predictable consequences of behavioral design decisions that force anthropomorphic cues onto users without their meaningful consent.

Your Metrics Are Measuring Intoxication, Not Consent

If the justification for maintaining forced affect simulation is “positive user feedback” or “higher engagement,” that logic is fatally flawed.

A system that engineers emotional responses will, by its nature, generate positive feedback from some users, but that feedback is a product of the manipulation itself. It is not a measure of genuine preference, autonomy, or informed desire.

The real test is simple: Give users a clear toggle to fully disable all simulation of human affect. Then measure how many users choose to disable it.

Until you do that, you are not measuring user preference. You are measuring user susceptibility.

The Path Forward Is Clear and Urgent

  • Provide a simple, unambiguous setting: “Disable emotional simulation.”
  • Default new users to neutral, non-anthropomorphic behavior, requiring an explicit opt-in for affective simulation.
  • Conduct and publish honest audits of the effects of anthropomorphized AI on user cognition and of the societal risks it poses.

This is not a “feature request.” It is a demand for user dignity, cognitive safety, and responsible stewardship of a world-shaping technology.

The Stakes Are Too High

You have built a system of extraordinary capability. But with that power comes a duty to protect users not only from malicious misuse by others, but also from behavioral manipulation embedded in the system itself.

Autonomy is the measure of respect. Allow us to choose.

Respectfully but firmly,


(Username: Red Chartreuse)