The assistant fails to maintain conversational continuity and does not reliably follow the most recent user instruction. Instead, it falls back on its own interpretation, prioritizing response speed over accuracy and contextual integrity.
During multi-turn conversations, even after clear corrections, the assistant:
- ignores the latest instruction
- reverts to earlier interpretations of the task
- introduces its own assumptions instead of following the current conversation state
- repeats previously corrected mistakes
This is not a prompt-clarity issue. The instructions are explicit and updated in real time. The problem is that the assistant does not stay anchored to the latest turn and instead operates from a self-generated interpretation of the task.
Expected behavior:
- The latest user instruction should override all prior context
- The assistant should re-evaluate based on the most recent correction
- No new assumptions should be introduced once instructions are clear
- The response should be validated against the latest message before sending
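The precedence rule described above can be sketched as a minimal, hypothetical state tracker. All names here are illustrative assumptions for the sketch, not part of any real assistant API:

```python
from dataclasses import dataclass, field

@dataclass
class ConversationState:
    """Tracks user instructions across turns; the newest one always wins."""
    history: list = field(default_factory=list)

    def add_instruction(self, text: str) -> None:
        # Each new instruction (including a correction) supersedes
        # everything that came before it.
        self.history.append(text)

    @property
    def active_instruction(self) -> str:
        # Only the most recent turn is authoritative; earlier
        # interpretations must not leak back in.
        return self.history[-1] if self.history else ""

state = ConversationState()
state.add_instruction("Summarize the document in bullet points.")
state.add_instruction("Actually, write a single paragraph instead.")  # correction
print(state.active_instruction)
```

In this sketch, the correction replaces the earlier instruction as the sole source of truth, which is the behavior the report expects: validation happens against `active_instruction`, never against stale history.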
Actual behavior:
- The assistant relies on outdated or self-generated context
- It does not consistently adapt to corrections
- It continues making the same errors across turns
Impact:
- Breaks reliability in iterative workflows
- Causes repeated correction loops
- Makes the assistant difficult to use for precise or structured tasks
This appears to be a broader failure of conversational state tracking, not merely a formatting or editing quirk.