I want to report a context-integrity issue involving fresh chats.
This is not a complaint about general memory, saved preferences, or the existence of reference chat history as a feature. The issue is narrower and more serious: in practice, a fresh chat sometimes fails to behave as a new task boundary, and prior workflow state appears to carry over into the response.
The specific pattern I observed is this:
I open a new chat and provide a self-contained input, such as an image plus a direct image-editing instruction. The expected behavior is straightforward: the system should treat the input as a direct execution request for image editing. Instead, the actual response sometimes behaves as if the new chat were continuing a prompt-revision or instruction-refinement phase from a previous thread.
That distinction matters. This is not just reuse of old facts, preferences, or long-term context. The response is carrying over the apparent stage of work from another chat. In other words, the system is not merely remembering what was discussed before; it is acting as though the new chat is already inside an earlier workflow.
I want to be precise about what I am and am not claiming. I am not making a claim about internal implementation, and I am not claiming direct proof of how context is assembled. I am reporting observed behavior at the input-output level: a fresh chat received a fresh, self-contained request, but the response reflected a task classification that only makes sense if prior workflow state was carried into the new conversation.
This matters because it breaks the practical meaning of a fresh chat. A fresh chat should function as a clean task boundary, especially when the user provides a complete input that does not require contextual reconstruction from earlier threads. If prior workflow state can override the task classification of the current input, then the user can no longer rely on a new chat to behave as an isolated starting point.
This is also not a one-off bad reply; I have observed the same pattern multiple times, and the repetition is what makes it worth reporting. The problem is not that one answer was inaccurate. The problem is that a new chat can be pulled into the operational state of a previous chat strongly enough to change how the current request is interpreted.
The practical result is misclassification of the user’s request. A direct execution request can be incorrectly handled as a request for rewriting, strengthening, or refining the instruction itself. When that happens, the model is no longer responding to the actual input in front of it. It is responding to an inherited workflow state.
The core issue, therefore, is not whether some amount of prior context may be referenced. The core issue is that the current chat’s input is losing priority to carried-over task state from another chat. That makes the boundary of a fresh chat unreliable in a way that is easy for users to notice and difficult for them to control.
I think this behavior should be examined as a context-boundary problem, not merely as a generic response-quality issue.