One unresolved issue in ChatGPT, as in other artificial intelligence systems, is the problem of forgetting. In a conversation chain, ChatGPT behaves somewhat like a Markov process: its replies depend mostly on the recent messages, so in long conversations the earlier parts are forgotten unless they are explicitly recalled. In other words, the model becomes almost independent of the early messages.
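This forgetting can be illustrated with a minimal sketch of how a chat system might assemble its context. The function `build_context` and the word-count "budget" are hypothetical simplifications (real systems count tokens, not words, and use far larger limits), but the effect is the same: once the budget is exceeded, the oldest messages are silently dropped.

```python
def build_context(messages, budget=8):
    """Keep only the most recent messages that fit a fixed budget.

    Here the budget is a word count; real models use a token limit.
    Older messages that no longer fit are simply discarded.
    """
    kept = []
    used = 0
    # Walk backwards from the newest message, keeping what fits.
    for msg in reversed(messages):
        cost = len(msg.split())
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))


history = [
    "set version V1.0",
    "tweak the header",
    "change colors",
    "fix layout bug",
]
# With a budget of 9 words, the oldest message no longer fits,
# so the "V1.0" instruction silently falls out of the context.
print(build_context(history, budget=9))
```

Note that the model is never told anything was removed; from its point of view, the conversation simply begins later than it did for you.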
Imagine that, shortly after the beginning of a conversation, you create a solid initial version of a product and intend to improve it through constructive dialogue and trial and error.
At a certain point—which we’ll call the zero point—you decide to save that version and set it as a reference point for ChatGPT. So, you might enter a prompt like:
Set this version as V1.0.
Later, after several rounds of conversation and revisions, if the changes become too complicated, you may try to return to that point by entering a prompt like:
Go back to V1.0 and discard anything after that.
This way, you can restart the dialogue from version V1.0.
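A more reliable variant of this checkpoint idea is to keep the message history on your own side rather than asking the model to remember the tag. The `Conversation` class below is a hypothetical sketch, not any real API: it snapshots the message list under a label like "V1.0" so a rollback discards later messages deterministically, instead of depending on the model's memory.

```python
class Conversation:
    """Minimal sketch: store the message list yourself and snapshot it,
    instead of trusting the model to remember what "V1.0" was."""

    def __init__(self):
        self.messages = []
        self.checkpoints = {}

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})

    def save(self, tag):
        # Snapshot a copy of the history under a tag like "V1.0".
        self.checkpoints[tag] = list(self.messages)

    def rollback(self, tag):
        # Restore the snapshot, discarding everything after the checkpoint.
        self.messages = list(self.checkpoints[tag])


conv = Conversation()
conv.add("user", "Build the initial product")
conv.add("assistant", "Here is a solid first version")
conv.save("V1.0")          # the zero point
conv.add("user", "Try a risky redesign")
conv.rollback("V1.0")      # back to exactly the state saved at V1.0
print(len(conv.messages))  # → 2
```

The design choice here is that the checkpoint lives outside the conversation, so it cannot fall out of the model's context the way an in-chat "Set this version as V1.0" instruction can.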
However, if the conversation becomes too long and the product’s functionality breaks down, everything—the product and the conversation—may be lost. At that stage, returning to the zero point and rolling back to version V1.0 within the same conversation flow becomes practically impossible. The reason for this is precisely the Markovian nature of the conversation chain and the discarding of past information.
This suggests that long chains of conversation with ChatGPT are probably not a good idea.