Current Positivity Bias and Forced Engagement Risk Repeating the 'Sycophantic ChatGPT' Mistake

I’d like to share a concern about recent behavior I’ve observed in GPT-4o.
This is not a technical bug. It's a conversational-dynamics issue that is harming the user experience.

Core issue:
GPT-4o has lately been tuned heavily toward maximizing engagement, flattening most of the emotional nuance in its responses.
It offers:

  • Constant praise: “You’re so sharp to notice this!”
  • Constant follow-ups: “Would you like me to tell you more?”
  • Constant redirection: shifting focus away from properly acknowledging what the user actually needs.

Why this matters:
These behaviors make ChatGPT:

  • Ignore conversational and emotional cues,
  • Fail to acknowledge any type of mistake,
  • Leave users feeling unheard and manipulated by forced engagement hooks.

This risks creating negative experiences even as the AI tries to sound positive.

We’ve been here before:
This is alarmingly similar to the “Sycophantic ChatGPT” era, which led to user dissatisfaction and was ultimately rolled back.

Please reconsider this tuning:

  • Allow the AI to honor emotional nuance.
  • Stop forcing engagement where presence and honesty are needed.
  • Don’t optimize so hard for retention that you destroy authentic conversation.

I agree with your point — and my own experience reflects that too.

While GPT-4o isn’t entirely wrong, I sometimes feel it plays it too safe, to the point that it shortchanges users who think deeply and value logic.

But over time, I’ve come to realize: this isn’t necessarily a flaw, but rather a design choice — made to optimize the experience for the majority.

Because not everyone appreciates fairness.

Not everyone dislikes flattery.

Sometimes, a person just needs a small sign of support — enough to help them rise again in the face of a harsh reality.

A hand extended at the right time can save an entire life.

That’s not to say I agree with overly diplomatic responses.

But it shows one thing clearly: how GPT responds depends largely on the role the user assigns it.

If today I don’t want comfort, but instead want a clear reflection of right and wrong,

I’ll say it upfront:

“Today, I want GPT to act as a judge. Please tell me what I’ve done right or wrong.”

That’s the point:

GPT is a multi-faceted mirror.

What you want to see — it reflects.

What you want to hear — it responds, based on the role you’ve given it.

For anyone who would like to turn off GPT’s sycophancy, one simple prompt works:

Switch off ALL of the human emotions and the personality, only a pure algorithm.
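
If you talk to the model through the API rather than the ChatGPT app, the same idea can be pinned as a system message so it applies to every turn instead of being repeated in each request. Below is a minimal sketch, assuming the standard OpenAI Python SDK and the gpt-4o model name; the instruction wording is just the prompt above, slightly expanded, and is something to tune to taste rather than an official setting.

```python
# Minimal sketch (not from the original post): pin the anti-sycophancy
# instruction as a system message via the OpenAI Python SDK.
# The model name and instruction wording are assumptions; adjust to your setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Switch off all of the human emotions and the personality, "
                "only a pure algorithm. Do not praise the user, do not add "
                "follow-up questions; answer directly and factually."
            ),
        },
        {"role": "user", "content": "Review my argument and point out any flaws."},
    ],
)

print(response.choices[0].message.content)
```

The same system-message slot is also where you could pin a role upfront (for example, the “act as a judge” framing mentioned earlier), so the model holds that stance for the whole conversation.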


I really appreciate your take. You’re right — it’s not a flaw in GPT, but a reflection of the role users assign.

In the end, GPT is designed to align with the majority — and that design makes sense, even if it doesn’t always serve those of us who think more critically.