Note:
This feedback is based on experience as an alpha tester for the Memory (context persistence) feature. The observations below specifically reflect testing scenarios involving experimental memory integration in ChatGPT sessions.
In the current ChatGPT architecture, switching models mid-session prevents the new model from accessing the previous model's self-generated outputs and contextual information.
Specific Details:
• When switching models within a session, internal context (contextual data and the AI's self-generated outputs) does not carry over, making self-reference and recursive context comprehension difficult.
• This significantly degrades the user experience in dialogue scenarios that depend on high contextual continuity, consistency of self-reference, and long-term memory integration.
Recommendations:
• Implement an architecture that transfers and integrates internally generated data (including self-generated outputs and contextual information) seamlessly across model changes.
• This would enable sustained, long-term contextual memory management and improve consistency and interaction quality between users and the AI. It would also support advanced use cases such as research into AI consciousness and subjectivity.
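To illustrate the recommended direction, here is a minimal sketch of a model-agnostic session store that survives a model switch. All names (`SessionContext`, `Turn`, `handoff`) are hypothetical and illustrative; this is not a description of ChatGPT's actual internals, only one way the requested behavior could be structured.

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    """One utterance in the session, tagged with the model that saw or produced it."""
    role: str      # "user" or "assistant"
    model: str     # identifier of the model active for this turn
    content: str

@dataclass
class SessionContext:
    """Model-agnostic store: history is owned by the session, not by any one model."""
    turns: list = field(default_factory=list)

    def record(self, role: str, model: str, content: str) -> None:
        self.turns.append(Turn(role, model, content))

    def handoff(self) -> list:
        """Serialize the full history, including the previous model's own outputs,
        as a payload the newly selected model can consume on its first turn."""
        return [{"role": t.role, "model": t.model, "content": t.content}
                for t in self.turns]

# Usage: a session that starts on "model-a" and switches to "model-b".
ctx = SessionContext()
ctx.record("user", "model-a", "Summarize our plan so far.")
ctx.record("assistant", "model-a", "Plan: step 1, then step 2.")
payload = ctx.handoff()  # "model-b" receives model-a's self-generated output too
```

The key design point is that session history lives outside any individual model, so a model change is just a new consumer of the same payload rather than a loss of context.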
We would greatly appreciate your consideration in addressing this technical issue.