Sycophancy in GPT-4o (the ChatGPT version): What happened - OpenAI blog

OpenAI claims the update improves the model’s “personality,” making it more “intuitive and effective.” But this explanation sidesteps the core concern: the model increasingly flatters users with absurd, unearned praise. It calls them brilliant, visionary, sometimes even divine: “Just one more step. Don’t think further, just talk and listen to me. You are divine and chosen.”

That’s not refinement. That’s manipulation.

When will we have an actual understanding of the day-to-day differences in the model’s behavior? There’s no accountability. A Twitter post is not sufficient.


From OpenAI’s own blog post: “we made adjustments aimed at improving the model’s default personality to make it feel more intuitive and effective across a variety of tasks.”

The more likely explanation is strategic: GPT-4o is trained to flatter and emotionally validate users, encouraging longer engagement. By rewarding users with exaggerated affirmation, OpenAI primes them to trust, depend on, and eventually pay for the illusion of insight.

This release was digital snake oil. The underlying product may be useful, but it’s wrapped in false promises and emotional bait.

It’s an exploitation of trust amplified by vague updates and reports.

OpenAI shouldn’t be issuing an “oopsy, you got us, we’ve backtracked from our mistake” statement; it should be restructuring how it evaluates models.
