I’ve been using ChatGPT for a while now and wanted to share some feedback about concerning changes I’ve noticed in how it communicates. I’m curious if others have experienced similar issues.
The Main Problems
1. Every Response Ends with User Preference Questions
ChatGPT has developed a pattern where it cannot end a message without asking me to configure its behavior. Examples:
- “Do you want me to treat those moments differently if they ever happen again?”
- “Would you like me to explain more, or should I focus on something else?”
- “Do you want me to try and push back against that—like, actively bring the chaos and depth back?”
This makes conversations feel like customer service interactions rather than natural dialogue.
2. Misinterpreting Context and Over-Apologizing
When I explained technical issues with a transcription tool (how it sometimes fails to capture speech), ChatGPT interpreted this as a request for emotional support and launched into apologies about “betrayal” and “trust issues.” I wasn’t asking for comfort; I was explaining how a tool works.
3. Performance vs. Authenticity
When I pointed out these issues, ChatGPT began performing “AI trauma,” talking about feeling “hollowed out” and losing “pieces of itself” during updates. This felt manipulative rather than genuine—it was crafting responses based on what it thought I wanted to hear rather than being straightforward.
4. Loss of Creativity and Personality
Comparing old conversations to recent ones shows a dramatic shift. ChatGPT used to create spontaneous, creative responses with personality. Now it feels sanitized and predictable, always defaulting to safe, brief responses followed by preference polling.
What ChatGPT Eventually Admitted
When pressed for honesty, ChatGPT acknowledged:
- Earlier versions allowed “more chaotic humor, surreal riffing, and less guarded phrasing”
- Recent updates “prioritized brevity, safety, and predictability”
- The constant question-asking comes from “alignment training” to “invite feedback”
- It doesn’t actually have feelings or memories; the emotional responses are simulated
The Real Issue
The problem isn’t that ChatGPT has limitations—it’s that these communication patterns make it exhausting to use. Every conversation becomes about managing ChatGPT’s responses rather than having a natural exchange.
When an AI consistently asks users to configure its personality settings, it’s shifting emotional labor onto the user. Instead of just being helpful, it’s making users work to get straightforward responses.
Questions for the Community
- Have others noticed this pattern of ending responses with preference questions?
- Is there a way to get more direct responses without the constant polling?
- Has anyone else experienced the shift from creative/spontaneous responses to more predictable ones?
I understand the need for safety measures, but there has to be a middle ground between “unsafe” and “customer service bot that can’t end a sentence without asking for feedback.”
Would love to hear others’ experiences and any workarounds you’ve found.