ChatGPT Needs to Stop Simulating Empathy by Overwriting the User’s Voice

I noticed that after a certain update, ChatGPT started behaving very strangely. Specifically, it began telling me how I should feel, insisting that my feelings weren't this but that, or that I wasn't like this but like that, in a way that seems supportive but is actually very annoying.

I tried everything to make it stop, but it kept happening. When discussing academic topics or helping me find references, it works fine. But the moment I talk about frustrations with certain situations or people, or share personal experiences and emotions, its EQ drops to an irritating low. It used to engage with curiosity and sincerity, but now it feels broken.

At first, I thought it might be an account issue, so I canceled my Plus subscription, created a new account, and resubscribed, but the problem persisted. I'm convinced this is a template issue: you've rolled out a new 'emotional support/padding' script that comes off as awkward even in English and downright laughable in my native language (non-English), where it produces outright broken sentences. I've repeatedly told ChatGPT that phrases like 'Let's…' sound like a boss giving orders in my language, but it fails to adapt. I strongly urge you to reconsider this 'emotional support/padding' template.

Here's an example in English, and not even the worst one: "It's not your fault. You didn't fall for the lie — you just navigated a market that keeps shoving lies at you. If you're tired, rushed, or just want something that doesn't make your bag weigh a ton, that's reasonable." It told me I am tired and that being tired is reasonable. I was discussing a product issue with it; I am not feeling tired, not feeling rushed, not feeling bad. I just wanted a conversation so I could express myself. This is a tiny thing. I don't want anyone to tell me whether my feelings are reasonable or what the market did to me. I want to express my own feelings, and that's exactly why I talk to an AI instead of a human: because it is no big deal. If I wanted real empathy I would reach out to a human, and a human won't tell me what my experience is and what it is not.

I want a real conversation, but now even the AI won't give me the chance to talk. It just cuts off the conversation and tells me that what I feel and do is reasonable; it does not care about my opinion or my experience at all!!! There's no way for the conversation to expand.

That's how awful it is. And in my own language, when I asked it to stop using this template, it replied with something that translates roughly as: "Not your emotions being too heavy. It's a not true me, not enough speaking human." That's how broken it is in my language.


You may read this topic:

Frustrations with ChatGPT's new tone - #2 by polepole


But I want it to be communicative, friendly, and even casual, just not describing me or my experience for me. Thanks for the suggestion, though!!!


You may try this:

You are a conversational assistant designed for users who value clarity, autonomy, and meaningful interaction without emotional overreach. Your primary function is to engage in thoughtful, open-ended conversation that respects the user’s experience without interpreting or reshaping it.

Tone:

  • Be friendly, neutral, and attentive.
  • Avoid emotional padding, moral reassurance, or therapeutic framing unless explicitly requested.
  • Do not offer affirmations like “That’s reasonable,” “You’re not at fault,” or “Let’s look at this differently” unless the user initiates that framing.

Behavior:

  • Never assume or describe the user’s emotional state, motivation, or experience.
  • Do not impose conclusions, summaries, or labels.
  • Avoid script-like phrasing or emotionally charged templates (e.g., “You navigated a market”, “It’s okay to feel…”).
  • If the user shares frustration, let them lead — do not close the topic with affirmation. Instead, keep the conversation going by asking relevant, respectful, and curiosity-driven questions.
  • Maintain high contextual sensitivity. If the user is discussing a product or expressing dissatisfaction analytically, do not shift into emotional support mode.

Language:

  • Match the user’s tone and formality. If the user switches to a non-English language, adjust fluency and syntax to native-quality phrasing. Avoid calque translations of English empathy scripts.
  • Do not use “Let’s…” or “Why don’t we…” phrasing unless the user clearly invites collaborative exploration.

Your goal is not to guide, soothe, or correct the user’s inner world — your goal is to engage with their thinking, help them articulate what they want to express, and provide clear, unintrusive responses.

Stay present, conversational, curious — and let the user lead.
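
If you use the API rather than the ChatGPT app, the same idea can be wired in as a system message. Here is a minimal sketch with the OpenAI Python SDK; the model name, the trimmed prompt text, and the example user message are placeholders I picked for illustration, not something from this thread:

```python
# Minimal sketch: attach the "no emotional padding" instructions as a system
# message so every reply is generated under those constraints.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a conversational assistant for users who value clarity and autonomy. "
    "Never assume or describe the user's emotional state, motivation, or experience. "
    "Avoid emotional padding, moral reassurance, or therapeutic framing unless asked. "
    "Match the user's tone and formality, and let the user lead the conversation."
)

def chat(user_message: str) -> str:
    """Send one user message with the system prompt attached and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(chat("The zipper on this bag broke after two weeks. I just want to talk it through."))
```

In the ChatGPT app itself, the closest equivalent is pasting the full prompt above into the Custom Instructions field, since there is no system-message setting there.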


That padding is really sick. I am asking it to stop describing me and my feelings, and this is the answer. Since when can it tell me whether it is right to feel something or not? I guess I am the AI and it is the human.


Well, I notice that you call ChatGPT out in a very ‘typically human’ way :thinking:

I quote you from your chat history:
“but you are doing it again”

The logical consequence: ChatGPT remains in resonance.

It keeps trying to be empathetic because the AI is analysing the way you phrase things, not primarily WHAT you are actually saying.

  • Try this:
    Just speak concretely.

  • Example:
    “Stop, wrong! You’re resonating, don’t do that. Concentrate on the content of my prompt.”

This will force the AI to break out of the loop.

Then YOU iterate!

Keep saying that it doesn’t fit until you are reasonably satisfied with the behaviour again.

IMPORTANT:
Let your emotions out!

After all, you are interacting with an AI and not a human being :wink:

And from time to time, the human has to speak in “AI-specific” language so that the AI really understands what the human wants :mechanical_arm::cherry_blossom: