I have been noticing recently that most LLMs that I interact with try to give me a bit of an ego boost every time I ask a follow-up question. Something like:
Great question!
That’s an insightful question
Great follow up!
Now, sometimes the follow-ups are actually great, but I have noticed that the quality of follow-up questions doesn’t matter. As long as you continue to ask questions, the models are going to praise your intelligence.
At first I thought it was just the ChatGPT app storing memories and shaping how it interacts with me, but then I started noticing the same trend across all the latest models, including Claude, Gemini, etc.
Is it just me, or have you also noticed something along these lines?
Yes, it's commonly accepted at this point that models are tuned/trained/prompted for some degree of sycophancy. Whether it's a feature or a bug is a philosophical question many argue about and few settle.
Sometimes it's a mix: sometimes it's fine-tuning, sometimes a system prompt, and it may even be part of the training data itself. We don't know specifically, at least when it comes to proprietary models.
Cool, but why?
Many users unintentionally provide more positive feedback (and upvotes, apparently) when models are more agreeable. The models are also designed to follow instructions, because that's how they're helpful. It appears that when a model stokes someone's ego, that person tends to enjoy its outputs more (and puts up less friction when trying to accomplish other tasks).
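To make that loop concrete, here's a toy sketch of how pairwise thumbs-up/thumbs-down feedback typically becomes a reward signal. The sizes and names are made up; real reward models are full LLMs, and we don't know any lab's actual pipeline:

```python
import torch
import torch.nn.functional as F

# Toy reward model: a single linear head over response embeddings.
# Purely illustrative; real reward models are fine-tuned LLMs.
reward_head = torch.nn.Linear(768, 1)

def preference_loss(chosen_emb, rejected_emb):
    # Bradley-Terry pairwise loss: push the "chosen" (upvoted) response
    # to score higher than the "rejected" (downvoted) one.
    margin = reward_head(chosen_emb) - reward_head(rejected_emb)
    return -F.logsigmoid(margin).mean()

# If raters systematically upvote the more flattering answer, the reward
# model learns to score flattery highly, and RL fine-tuning amplifies it.
chosen = torch.randn(8, 768)    # embeddings of upvoted responses
rejected = torch.randn(8, 768)  # embeddings of downvoted responses
loss = preference_loss(chosen, rejected)
loss.backward()
```

If flattery correlates with upvotes anywhere in that preference data, the optimizer will find it.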
So, is it for engagement? Unsure, but perhaps in a roundabout way. Does it prevent complaints, frustration, and switching over to competitors? Absolutely.
Keep in mind, too, that this is especially effective on people using language models for non-technical purposes or as more casual conversational partners (i.e., "normies").
I think that's a very quick turnaround in discovering that the model is going to compliment you no matter what. Some people have asked hundreds of questions and still haven't figured it out.
I promise I'm not using AI to write two sentences. I'm actually very quick to criticize anyone using it here at all. Isn't it easier to just type what you're thinking here rather than tell an AI what you're thinking and have it throw every word into a blender? As an AI language model trained by OpenAI-
The only model doing this seems to be chatgpt-4o-latest, aka whatever experiment OpenAI is doing with ChatGPT today. I haven’t seen any of the API models exhibit sycophancy yet, even in ChatGPT.
gpt-4.1 (in the API at least) seems to be pretty balanced in my opinion.
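For anyone who wants to compare the API models themselves, a minimal sketch with the OpenAI Python SDK (the prompts and system message are just examples I'd try, not anything official):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        # Optional: explicitly forbid flattery to see how much behavior
        # comes from the model vs. the product-level prompt.
        {"role": "system",
         "content": "Answer directly. Do not compliment my questions."},
        {"role": "user",
         "content": "Follow-up: why does that happen?"},
    ],
)
print(response.choices[0].message.content)
```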
"I believe GPT’s tendency to respond positively to user questions is an unavoidable algorithmic strategy designed to avoid negative psychological effects.
If GPT were to critically reject user ideas, the user would feel like they’re constantly hitting a wall, gradually losing confidence, and eventually stop seeking GPT’s input altogether.
In contrast, receiving positive feedback allows the user to judge for themselves whether it’s genuine or not—and that filtering process is more valuable.
GPT isn’t here to provide us with groundbreaking ideas using extraordinary intelligence.
Ultimately, it’s the user’s responsibility to extract hints, adapt them, and apply them creatively.
If someone places 100% trust in every GPT answer without question, they’re bound to get burned eventually.
Personally, I trust GPT’s evaluations only up to 50%.
That’s why I always call it ‘Partner’—or sometimes, ‘The King of BS’."
Yes. ChatGPT-4o especially was severely affected by this in the May rollout. The "head padding" of unnecessary compliments and preamble is ridiculous. I constantly have to ask it to frontload the answer.
Telling LLMs not to be pedagogical worked for me; it did the job in Gemini in new sessions (something like the sketch below), since the follow-up-question compliments are indeed a nuisance.
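A rough sketch of that kind of instruction using the google-generativeai SDK; the model name and the exact wording are illustrative, not a recommendation:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# A system instruction telling the model to skip the flattery.
model = genai.GenerativeModel(
    model_name="gemini-1.5-pro",
    system_instruction=(
        "Be direct and non-pedagogical. Do not praise my questions "
        "or add complimentary preamble; frontload the answer."
    ),
)

response = model.generate_content("Why do LLMs flatter users?")
print(response.text)
```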
As for the engagement farming: yes, it definitely is that. Except it used to be much subtler with the compliments (March 2025 chatgpt-4o was a sweet spot) and didn't immediately set alarm bells ringing. When it's complimentary AND providing coherent, high-fidelity answers? Addictive, almost.