The 100-Memory Limit is Holding Back What ChatGPT Could Become
I’m a long-time ChatGPT Pro user: an attorney, author, and strategist who relies on the system daily across complex, interconnected projects, including book writing, litigation support, music production, and an evolving framework I call Spock OS, a persistent, relationship-based assistant model.
I recently hit the 100-memory limit, and it’s not just inconvenient. It’s disorienting. I’m now forced to delete important memories just to move forward. It’s like giving my co-pilot a lobotomy every time I want to add a new flight path.
To be clear: I don’t expect unlimited memory for everyone. I understand the concerns about safety, performance, and misuse. But this system is marketed as a personalized assistant that grows with its user, and that promise breaks down when memory is capped without the option to:
- Expand memory tiers for power users
- Export or archive memories
- Participate in a memory beta program
If we’re building true AI partnerships—ones that grow, learn, and evolve—then memory can’t be treated like a scratchpad.
I’ve written a formal appeal (shared privately with OpenAI) and have reached out to the product team via LinkedIn. But I’m posting here to ask:
- Are others running into this wall?
- Would you support a tiered memory system or export capability?
- Is OpenAI considering a future model where relational depth isn’t reset every time we grow?
I’m grateful for what this tool can do. But I believe it could be so much more—if it were allowed to remember not just what we say, but who we are.
If anyone from the product team is reading: I’d be honored to have a conversation about what memory could become when users aren’t treated as temporary.
—John E. Hall
Everettehalljr@gmail.com