Is the Assistants API really stateless?

I am experimenting with ChatGPT in the context of coaching.
The principle is to provide GPT with:

  • an anonymous client profile (personality)
  • the client's topic
  • one metaphor for the way the client currently behaves regarding the topic
  • one metaphor for the new way (to be developed through the coaching)

GPT then suggests practices to bridge the gap.
The results provide a good base.

My point here is the following:
Several times, with the same assistant but different threads, GPT used the exact metaphors from a previous client in the context of a new client (both metaphors, I mean).

How is this possible? Can coincidence occur several times with such precision?

Thanks in advance for your comments.