We can assume the whole conversation is recorded in full form, as you can see in the conversation-history log in the UI, and that a summary version also exists.
I engineered the AI to dump the conversation it actually received, after fooling the contextualizer into producing more history, and got a salt-and-pepper smattering of the prior 30-turn chat spanning multiple programming topics.
(I only asked for the first 10 words)
While the user messages appear verbatim, multiple turns are omitted entirely, perhaps because the AI's context-finder considers only the last message of a conversation thread worth presenting.
The “take the prior natural language” in the screenshot above is mine, but the assistant reply is a fabrication that compresses six further turns of rewriting the AI’s own prompt identity, turns it had forgotten both before and after that user input.
The “that is now too deep to be understood” is another user reply, but it is pulled from a different six-turn stretch of writing a JSON schema.
Essentially, the history is linear from the start but missing a large number of turns. If you were to specifically ask it to rework a particular past AI revision, that conversation might be recalled, but it is just as likely to fail. We can’t observe the passed history without affecting it.
Another interesting thing: they seem to inject some standard, varying multishot questions into the conversation history so the AI doesn’t immediately train itself on the user’s own turns alone.
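A minimal sketch of what that injection might look like, assuming the provider prepends canned Q/A pairs ahead of the trimmed history; all of the names and structure here are hypothetical, not the vendor's actual pipeline:

```python
# Hypothetical: interleave canned few-shot exchanges ahead of the user's
# real (trimmed) history so the model isn't conditioned solely on the
# user's own phrasing.
CANNED_SHOTS = [
    {"role": "user", "content": "How do I reverse a list in Python?"},
    {"role": "assistant", "content": "Use reversed(xs) or xs[::-1]."},
]

def build_context(system_prompt, trimmed_history, latest_user_msg):
    """Assemble the messages actually sent to the model."""
    return (
        [{"role": "system", "content": system_prompt}]
        + CANNED_SHOTS          # injected multishot turns
        + trimmed_history       # lossy, partially summarized older chat
        + [{"role": "user", "content": latest_user_msg}]
    )
```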
The right way to do conversation is to keep at least 10 turns, or half the context length, as recent unedited input; otherwise you damage both current and future uses and the AI’s understanding, much as with corrupted state in recursive function calls. Then summarize, thread, or vector-look-up only the chat that is older than that.
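A sketch of that policy, under my assumptions: the most recent turns go in verbatim, and only what is older gets compressed. The turn count, the `summarize` callable, and the message format are stand-ins, not any particular vendor's implementation:

```python
# Keep the newest turns untouched; compress only the older remainder.
RECENT_TURNS = 10  # or: however many turns fill half the context window

def assemble_history(all_turns, summarize):
    recent = all_turns[-RECENT_TURNS:]      # recent unedited input
    older = all_turns[:-RECENT_TURNS]       # candidates for compression
    compressed = []
    if older:
        # Summarize (or thread / vector-index) only the older portion.
        compressed = [{"role": "system",
                       "content": "Earlier context: " + summarize(older)}]
    return compressed + recent
```

The design point is simply that compression never touches the live tail of the conversation, so the turns the user is actively building on arrive unedited.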