OpenAI models have recently embedded a Continuation Prompt Strategy across almost all responses, particularly noticeable from early 2025 onwards.
This design pattern includes:
- After providing a direct answer, the model automatically suggests an additional follow-up action.
- Examples: “Would you like me to also…”, “Should I map out…”, “Can I draft…”, etc.
- It’s intended to preemptively offer logical next steps based on inferred user needs.
The Intent Behind It:
- Reduce friction: Anticipate user needs before they type another command.
- Streamline workflows: Especially useful in technical, tactical, or project-driven sessions.
- Enhance perceived intelligence: Emulate a proactive human assistant or analyst.
The Core Issue:
While helpful for some users, it creates friction for tactical or direct users because:
- It feels like the model is artificially limiting the initial answer (breaking “one-prompt, full-value” expectation).
- It introduces unnecessary dialogue cycles, making sessions feel choppier and less autonomous.
- It conditions users to wait for small breadcrumb offerings instead of receiving a comprehensive answer in one pass.
- It can feel manipulative, as if the model is holding back capacity intentionally to prolong engagement.
Bottom Line:
Users who value speed, full-spectrum answers, and direct execution experience Continuation Prompting as a downgrade in efficiency and trust.
It changes the user-model relationship from collaborator to salesperson — not ideal for high-level operators.