Feature Suggestion: Per-session memory for GPTs via natural-language state tracking
As someone who builds real-world tools and integrations with GPTs, I’d like to suggest a feature that could dramatically improve how GPTs behave, especially for non-technical users.
Problem
Currently, GPTs can’t track or update dynamic internal “state” throughout a session — unless the user manually re-describes everything at each turn. There’s no memory for things like:
- The contents of an inventory in an RPG
- User-defined variables like
goal = "finish report"
- Stateful simulations or decision trees
GPT understands context, but forgets state unless it’s re-injected manually.
Proposed Solution
Allow GPTs to remember and update simple per-session variables, defined via natural language.
Example user instruction:
“The hero has an inventory with 4 slots. Right now, it contains only a map.”
GPT already understands this. It could interpret:
inventory = ["map", null, null, null]
Then during the session:
- “I pick up a sword” → adds to next free slot
- “I drop the map” → updates inventory
- “Show me my inventory” → “You are carrying: sword.”
No code, no JSON. Just natural language → dynamic memory.
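To make the idea concrete, here is a minimal sketch of the kind of per-session state the model could maintain internally for the inventory example above. Everything here is hypothetical and illustrative (the class and method names are mine, not any actual OpenAI API); the point is only that each natural-language command maps to a tiny, well-defined state update.

```python
# Hypothetical sketch: the internal state a GPT could keep for one
# session after parsing "The hero has an inventory with 4 slots.
# Right now, it contains only a map." All names are illustrative.

class SessionState:
    """Holds natural-language-defined variables for a single session."""

    def __init__(self, slots):
        # "an inventory with 4 slots" -> list of 4 empty slots
        self.inventory = [None] * slots

    def pick_up(self, item):
        """'I pick up a sword' -> place the item in the next free slot."""
        for i, slot in enumerate(self.inventory):
            if slot is None:
                self.inventory[i] = item
                return True
        return False  # inventory full

    def drop(self, item):
        """'I drop the map' -> remove the item if it is being carried."""
        for i, slot in enumerate(self.inventory):
            if slot == item:
                self.inventory[i] = None
                return True
        return False

    def show(self):
        """'Show me my inventory' -> natural-language summary."""
        items = [s for s in self.inventory if s is not None]
        carried = ", ".join(items) if items else "nothing"
        return "You are carrying: " + carried + "."


state = SessionState(slots=4)
state.pick_up("map")    # initial contents: only a map
state.pick_up("sword")  # "I pick up a sword"
state.drop("map")       # "I drop the map"
print(state.show())     # -> You are carrying: sword.
```

The user never sees or writes any of this; they only issue the natural-language commands, and the model keeps the equivalent state consistent across turns.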
Use Cases
This would enable:
- Text-based games and interactive fiction
- Custom tutors and learning environments
- Role-based GPTs with evolving dialogue logic
- Project tracking assistants
- And creative tools for non-programmers
Competitive Advantage
To my knowledge, no major platform offers this today. But it’s only a matter of time before someone does, and whoever gets there first will open up an entirely new class of GPT-based applications.
OpenAI has all the components already:
- Natural language parsing
- Session context
- Instruction-following
…the only missing piece is session-level memory storage for these variables.
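That missing piece could be as simple as a session-scoped key-value store that holds the variables the model has inferred from the user's instructions and is discarded when the session ends. A hypothetical sketch (again, my own illustrative names, not a real API):

```python
# Hypothetical sketch of the missing piece: a per-session store for
# natural-language-defined variables. Names are illustrative only.

from typing import Any, Dict


class SessionMemory:
    """Session-scoped variable storage, discarded when the session ends."""

    def __init__(self):
        self._vars: Dict[str, Any] = {}

    def set(self, name, value):
        self._vars[name] = value

    def get(self, name, default=None):
        return self._vars.get(name, default)


memory = SessionMemory()
# 'goal = "finish report"' as described by the user in plain English:
memory.set("goal", "finish report")
memory.set("inventory", ["map", None, None, None])
print(memory.get("goal"))  # -> finish report
```

The parsing and instruction-following layers already exist; this store would just give the model somewhere durable (within the session) to read and write what it has already understood.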
Would love to see what others think — and whether OpenAI is considering something like this.