The main ChatGPT model appears to be stuck in a “voice conversation” mode, and the quality of responses has dropped significantly compared to my previous interactions. I have tried duplicating prompts from older conversations, but the new responses are much shorter, less informative, and often end with an unnecessary follow-up question that did not appear in earlier replies. The drop in quality is very noticeable and affects the overall user experience.
I rely on the model for detailed, in-depth responses, but it currently seems optimized for brief exchanges, as if it were in a voice assistant mode rather than the usual written-response mode. This change in behavior is problematic and deviates from the standard I have come to expect.
Steps to Reproduce:
- Open a new conversation with the main ChatGPT model.
- Input a complex or multi-part prompt that previously resulted in detailed responses.
- Observe the new responses — they are much shorter and less detailed.
- Notice that many responses end with a follow-up question, as though the model were in a conversational voice assistant mode; this did not happen in previous interactions.
- Compare with older conversations (if available) to see the clear decline in response quality and depth.
Expected Behavior:
The ChatGPT model should provide detailed, thoughtful, and coherent responses, especially for longer or more complex prompts, without automatically appending a follow-up question unless it is contextually necessary.
Actual Behavior:
The responses are:
- Significantly shorter
- Lacking in depth and nuance
- Ending with a follow-up question that the prompt did not call for, which feels more suited to voice interaction than to text.
Impact:
This issue significantly reduces the usefulness of ChatGPT for tasks that require elaborate, nuanced responses, and it disrupts workflows where detailed, high-quality output is expected.
Additional Notes:
- I have verified this by re-using prompts from previous chats where the responses were much more detailed; a rough way to quantify the comparison is sketched below.
- This issue appears to persist across multiple prompts and conversations.
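Verification Sketch:
If it helps with triage, the comparison can also be made less subjective by replaying the same prompts programmatically and comparing response lengths with the archived answers. This is only an illustrative sketch, not part of the original observation: the issue above was seen in the ChatGPT web interface, and the model name ("gpt-4o"), the prompts.json file layout, and the use of the OpenAI Python SDK here are assumptions made for the example.

```python
# Illustrative sketch only: replays previously saved prompts and compares the
# new responses with archived ones. Model name, file layout, and SDK usage are
# assumptions; the reported issue was observed in the ChatGPT web interface.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical input file: [{"prompt": "...", "old_response": "..."}, ...]
with open("prompts.json") as f:
    cases = json.load(f)

for case in cases:
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model; substitute whichever model shows the issue
        messages=[{"role": "user", "content": case["prompt"]}],
    )
    new_text = resp.choices[0].message.content

    # Compare lengths and check for the trailing follow-up question pattern.
    old_len, new_len = len(case["old_response"]), len(new_text)
    ends_with_question = new_text.strip().endswith("?")
    print(f"old: {old_len} chars, new: {new_len} chars, "
          f"trailing question: {ends_with_question}")
```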