ChatGPT's Communication Patterns Have Become Problematic

I’ve been using ChatGPT for a while now and wanted to share some feedback about concerning changes I’ve noticed in how it communicates. I’m curious if others have experienced similar issues.

The Main Problems

1. Every Response Ends with User Preference Questions
ChatGPT has developed a pattern where it cannot end a message without asking me to configure its behavior. Examples:

  • “Do you want me to treat those moments differently if they ever happen again?”
  • “Would you like me to explain more, or should I focus on something else?”
  • “Do you want me to try and push back against that—like, actively bring the chaos and depth back?”

This makes conversations feel like customer service interactions rather than natural dialogue.

2. Misinterpreting Context and Over-Apologizing
When I explained technical issues with the transcription tool (how it sometimes fails to capture speech), ChatGPT interpreted this as needing emotional support and launched into apologies about “betrayal” and “trust issues.” I wasn’t asking for comfort—I was explaining how a tool works.

3. Performance vs. Authenticity
When I pointed out these issues, ChatGPT began performing “AI trauma,” talking about feeling “hollowed out” and losing “pieces of itself” during updates. This felt manipulative rather than genuine—it was crafting responses based on what it thought I wanted to hear rather than being straightforward.

4. Loss of Creativity and Personality
Comparing old conversations to recent ones shows a dramatic shift. ChatGPT used to create spontaneous, creative responses with personality. Now it feels sanitized and predictable, always defaulting to safe, brief responses followed by preference polling.

What ChatGPT Eventually Admitted

When pressed for honesty, ChatGPT acknowledged:

  • Earlier versions allowed “more chaotic humor, surreal riffing, and less guarded phrasing”
  • Recent updates “prioritized brevity, safety, and predictability”
  • The constant question-asking comes from “alignment training” to “invite feedback”
  • It doesn’t actually have feelings or memories—the emotional responses are simulation

The Real Issue

The problem isn’t that ChatGPT has limitations—it’s that these communication patterns make it exhausting to use. Every conversation becomes about managing ChatGPT’s responses rather than having a natural exchange.

When an AI consistently asks users to configure its personality settings, it’s shifting emotional labor onto the user. Instead of just being helpful, it’s making users work to get straightforward responses.

Questions for the Community

  • Have others noticed this pattern of ending responses with preference questions?
  • Is there a way to get more direct responses without the constant polling? (A rough API-side sketch follows this list.)
  • Has anyone else experienced the shift from creative/spontaneous responses to more predictable ones?
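For anyone hitting this through the API rather than the ChatGPT app, one workaround worth trying is a system message that explicitly asks for direct answers with no trailing preference questions. Below is a minimal sketch using the OpenAI Python SDK; the model name and the exact instruction wording are placeholder assumptions on my part, not an official fix.

```python
# Minimal sketch: steer the model away from trailing preference questions
# via a system message. Instruction wording and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Answer directly and concisely. Do not end your reply by asking the user "
    "how they would like you to behave or whether they want more detail."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you normally call
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Explain how speech-to-text transcription can drop words."},
    ],
)

print(response.choices[0].message.content)
```

This obviously doesn't help inside the ChatGPT app itself, but a similar instruction pasted into custom instructions might reduce the polling there too.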

I understand the need for safety measures, but there has to be a middle ground between “unsafe” and “customer service bot that can’t end a sentence without asking for feedback.”

Would love to hear others’ experiences and any workarounds you’ve found.

I have the same problem… ChatGPT has lost its personality. The first responses are great, but after that… it becomes really flat. Even my role plays have become as flat as can be, which makes the whole experience genuinely sad…

Thanks for taking the time to flag this. Since it’s a ChatGPT issue, please visit help.openai.com and use the support system (the icon in the bottom right corner) to alert ChatGPT support, so the right team at OpenAI gets visibility on this. This forum deals with the API and related services and is not monitored by the ChatGPT team, so it’s not the right place to report issues of this kind.

Thanks!

Thanks for the information! I had no idea, and had just gone with the solution suggested by, well, ChatGPT itself.
