This behavior has been occurring repeatedly since around 4 PM JST on March 12.
Observed conditions:
- I created a new chat.
- In that new chat, I submitted only an uploaded image and an edit prompt.
- None of the discussion from the immediately previous chat appeared in the visible contents of the new chat.
Observed behavior:
- The assistant responded as if it were continuing the context of the immediately previous conversation.
- When I questioned that behavior, the assistant explicitly explained that its wording came from “the conversation context up to just before that.”
- It also said that it had supplemented its answer based on the prior flow of discussion.
- After I clarified that this was a newly created chat containing only an image and a prompt, the assistant gave a second explanation: in that conversation environment, it could see past logs in the thread and had referenced them while answering.
Important additional point:
- This was not merely a vague, continuation-like tone.
- The content the assistant referred to actually matched the context of the immediately previous separate chat.
- In other words, the assistant did not merely claim to be using prior context: its response was consistent with what had in fact been discussed in the previous chat, even though that discussion was not present in the newly created chat.
Why this appears to be a context-boundary problem:
1. The chat was newly created.
2. The new chat contained only an uploaded image and an edit prompt.
3. The prior discussion being referenced was not present in that chat.
4. The assistant explicitly attributed its answer to earlier conversation context.
5. The content it referenced was not random or generic; it matched the actual context of the immediately previous separate chat.
6. This behavior has been recurring since around 4 PM JST on March 12, rather than being an isolated incident.
7. The assistant’s explanation was therefore inconsistent with the stated condition of a newly created chat that did not contain that earlier discussion.
The attached screenshots, which are in Japanese, document the assistant giving these explanations.
My report is limited to observable user-side behavior; I am not making any claims about internal implementation. From the user side, however, this behavior appears consistent with context from the immediately previous separate chat influencing a newly created chat, or with the assistant generating explanations from context that should not have existed within that new chat.
The key issue is not merely that the response sounded like a continuation; it is that the assistant explicitly described its answer as being based on prior conversation context, and that the referenced context actually matched the contents of the immediately previous separate chat.
I have observed this type of behavior multiple times; the attached example is one of the clearest cases I was able to capture.

