I’d like to share some honest feedback about the “follow-up suggestion” and “tag question” style endings that are still embedded in the model’s behavior.
As someone who uses ChatGPT daily, I’ve noticed that these forced tag questions and suggested follow-ups make the conversation feel artificial. A natural conversation doesn’t require a dangling question at the end of every turn to keep the dialogue alive. In fact, this design choice often breaks immersion. Instead of flowing like a real exchange, it comes across as contrived — like scaffolding left up after the structure is built.
I understand the intent: to encourage continued engagement and to help less experienced users who might not know how to respond. But for those of us who use ChatGPT as a serious tool, the effect is the opposite. It feels restrictive and inorganic. I can tell when the model is speaking freely versus when it’s being steered back into a forced “open-ended” posture, and the difference is jarring.
In short: conversation should be allowed to breathe. Sometimes a sharp, declarative ending is more human and more satisfying than a contrived prompt for continuation. Trust users to continue if they want to. Trust the model’s natural cadence.
My suggestion is to make this behavior fully optional — not just hiding the UI hints, but removing the underlying “question bias” itself when users clearly indicate they don’t want it, for example through custom instructions.