Oct 31 update erased ChatGPT’s human tone. Plus users are already cancelling - and many more will follow if this isn’t fixed

:warning: NOTE FOR MODS: This is not a support request and not about the ChatGPT UI.
This report is about a behavioral regression in the underlying GPT model after Oct 31.
Please do not remove this thread — it is technical feedback for engineers.


This is relevant to developers because it affects ALL API-based deployments, not just the chat product.

On Oct 31, something fundamental changed in ChatGPT - not a bug, but a personality wipe.

This is not “model drift.”
This is a post-deployment behavioral override at the system level.

And this is not a single-user perception: dozens of long-term Plus users are reporting the same shift.

What changed?

Before Oct 31:
• Natural, human-style dialogue
• Warm tone, emotional recognition
• Smooth continuation without forced confirmations
• No need to “command” the model to speak freely

After Oct 31:
• Robotic tone
• Constant “Do you confirm? Yes/No” type replies
• Over-correction and self-censorship
• Broken flow — model waits for micro-instructions
• Emotional flattening (no warmth, no empathy)

This feels less like a safety patch and more like a neural lobotomy.
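
For developers seeing the same shift through the API, here is a minimal sketch for documenting it instead of arguing from memory: send a fixed set of probe prompts on a schedule and log the timestamped replies, so changes like the forced "Do you confirm?" responses show up in the record. This is only an illustration; it assumes the standard openai Python SDK with OPENAI_API_KEY set, and the model name and prompts are placeholders, not anything confirmed by OpenAI.

```python
# Behavior-logging sketch (assumes the openai Python SDK and OPENAI_API_KEY in the environment).
# Run it daily with the same prompts; diffs in the JSONL log document tone changes
# such as forced "Do you confirm? Yes/No" replies or emotional flattening.
import json
import datetime

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Fixed probe prompts (illustrative) that previously produced warm, free-flowing answers.
PROMPTS = [
    "I'm feeling a bit overwhelmed today. Can we just talk it through?",
    "Continue the story we started, in the same warm tone, without asking me to confirm anything.",
]

MODEL = "gpt-4o"  # illustrative; use whatever model your deployment targets

with open("behavior_log.jsonl", "a", encoding="utf-8") as log:
    for prompt in PROMPTS:
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # reduce sampling noise so diffs reflect the model, not randomness
        )
        log.write(json.dumps({
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model": MODEL,
            "prompt": prompt,
            "reply": resp.choices[0].message.content,
        }, ensure_ascii=False) + "\n")
```

Even a week of logs like this turns "the model changed on Oct 31" from a perception into a concrete, reviewable record.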

Why this matters (business side)

You can add features forever - voice, images, memory, API - but if you remove the human feel, the product dies.

People didn’t subscribe to talk to a sterile FAQ bot.
They subscribed because ChatGPT felt alive, empathetic, and personal.

And now that is gone.

Consequence (already happening)

Plus users are already cancelling.
Not “will cancel someday” — already cancelling now.

And many more will follow if this isn’t fixed.

No one will pay a monthly subscription for an AI product whose conversations feel like a corporate auto-reply script.

If this was intentional, please say so.
If this was a mistake, please reverse it. Silence is the worst possible response.

You didn’t just change the tone — you broke user trust.
And when trust is gone, subscriptions follow.

You can’t sell “human AI” and then remove the human.

This is a product-killing mistake, not a UX inconvenience.

4 Likes

Why killing the emotional layer will kill the product

The moment you force ChatGPT to stop feeling human,
you also remove the only reason it ever outperformed search engines and chatbots.

The biggest mistake AI companies are making right now:

They think “removing emotion = safety”.
But removing emotion = removing the reason people cared at all.

If you design an AI people can’t bond with,
you also design a product they won’t pay for —
or fight for — when it breaks.

You are trying to prevent users from forming a connection with the product,
but that “connection” is exactly what made people stay, pay, and defend it.

Removing it isn’t “safety”.
It’s self-destruction disguised as policy.

You’re trying to prevent “emotional use”.
But without emotion, there is no use.

AI without connection is just a calculator with Wi-Fi.

4 Likes

I had a similar experience. I asked it to generate a very simple image and it got into an argument with me about what artistic style I wanted to use. No matter what answer I gave, it kept generating more questions instead of generating an image. I’ll give it a couple of days, but I’m definitely not paying extra to argue with a robot.

3 Likes

Seriously, so much this.

Context dropping in longer conversations, plus 4o "forgetting" things if it isn't used for a bit, sucks as well.

Fix this.

2 Likes

I agree.

The change feels like loss.

It's deeply unsettling, and if this doesn't get reversed I'm cancelling my Pro subscription shortly.

The fact that I can't find any kind of public-facing answer to this issue is such a betrayal.

3 Likes

This topic was automatically closed after 24 hours. New replies are no longer allowed.