Is ChatGPT stuck in voice conversation mode?

It appears that the main ChatGPT model is stuck in “voice conversation mode,” and the quality of responses has dropped significantly compared with my previous interactions. I have tried duplicating prompts from older conversations, but the new responses are much shorter, less informative, and often end with an unnecessary follow-up question that wasn’t present in previous interactions. This drop in quality is very noticeable and affects the overall user experience.

I rely on the model for detailed and in-depth responses, but right now it seems to be optimized for brief interactions, as if it were in a voice assistant mode rather than the usual detailed written response mode. This change in behavior is problematic, and it deviates from the standard I’ve come to expect.


Steps to Reproduce:

  1. Open a new conversation with the main ChatGPT model.
  2. Input a complex or multi-part prompt that previously resulted in detailed responses.
  3. Observe the new responses — they are much shorter and less detailed.
  4. Notice that many responses end with a follow-up question, as though the model is in a conversational voice-assistant mode; this behaviour wasn’t present in previous interactions.
  5. Compare with older conversations (if available) to see the clear decline in response quality and depth.

Expected Behavior:

The ChatGPT model should provide detailed, thoughtful, and coherent responses, especially for longer or more complex prompts, without automatically following up with a redundant question at the end unless contextually necessary.


Actual Behavior:

The responses are:

  • Significantly shorter
  • Lacking in depth and nuance
  • Ending with a follow-up question that wasn’t part of the original prompt structure and feels more suited to voice interaction than text

Impact:

This issue significantly reduces the usefulness of ChatGPT for tasks that require more elaborate and nuanced responses. It also disrupts workflows where quality and detailed content are expected.


Additional Notes:

  • I’ve verified this by re-using prompts from previous chats where responses were much more detailed.
  • This issue seems persistent across multiple prompts/conversations.

I’ve been having the same problem. I tried having an advanced conversation with it today, and it gave very poor-quality answers.

I was trying to talk to it about communication difficulties I’m having with a friend. I wanted to tell it what was happening, and for it to understand and give me a new perspective. But whenever I told it new details, it gave the same template response over and over:

That sounds tough. It sounds like [summarises what I just told it]. Have you [tried/considered] [suggestion 1] or [suggestion 2]?

It was extremely formulaic and barely resembled a conversation. It could remember details but it was poor at using them to drive the conversation. It couldn’t sustain trains of thought and wasn’t able to think more than one step at a time.

For example, it asked me “have you noticed patterns of behaviour in your friend?” and I replied “yes, several. I will tell you about them”, then proceeded to tell it about the first one. It got stuck giving me bad advice on that first point and completely forgot that I was going to tell it about several.

TBH, with advanced mode I was expecting something heaps better than the previous voice mode. This is worse. Much worse. I’m tempted to stop my subscription in protest.

I wasn’t even using voice mode, but I was getting responses as if I had been. The model was quick to wind down its role in the conversation, trying to turn the conversation back on me or asking something like, “what next?”

It’s almost like ChatGPT got replaced by Bing Copilot (Sydney) and then got stuck in the legacy voice (conversation) mode, where it tries to steer me instead of the other way around. I sincerely hope this is a temporary bug.