Summary: Current memory systems in LLMs retain facts or user preferences, but not the learning pathways through which users teach the model. I propose “process-based memory” as a new feature to enable ChatGPT to evolve into a true learning partner.
Your recent advancements in image generation are excellent!
This is a feature I’ve long anticipated and was eager to recommend to you.
Now, I’d like to suggest another feature that I believe is equally, if not more, important:
I believe the memory of large language models should evolve into process-based memory: memory that records how the user taught the model to complete a task, not just the outcome.
I’m not asking the model to remember isolated facts or user preferences—
What I truly hope is that the model can remember the process through which I taught it to solve a specific problem.
In other words, not just the answer, but the way I helped it arrive at the answer.
Current models lack this kind of "pathway memory," which makes the experience frustrating for users.
I hope future iterations can support a form of corrective learning memory:
once a user has taught the model the correct way to solve a task, the model should be able to autonomously apply that same method when facing similar problems again.
This ability to retain and reuse learned processes is far more important than simply accessing a knowledge base or vector store.
I truly hope this suggestion will be taken seriously—
because it is, in my view, a crucial step toward enabling AI to truly “grow.”
Although I am a user from China, I have witnessed the growth of ChatGPT since its earliest days.
Having used it extensively and followed its development closely, I’ve come to deeply understand both its tremendous potential and its current limitations.
I sincerely hope it continues to evolve—into an AI that not only responds, but truly understands, and grows into a trusted partner for human users.
Thank you for your continued efforts—
and for listening to the voices of users from all around the world.