Hey devs!
I’ve been working with ChatGPT’s new MCP connectors and built something for memory persistence across sessions, with help from Codex. Curious what you think of the approach.
The idea: Unlike ChatGPT’s built-in memory, this uses AI-powered fact extraction with quality scoring to store only meaningful, atomic facts. Instead of losing context with every new chat, it retrieves and injects only the context relevant to the current conversation.
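To make the "only meaningful, atomic facts" part concrete, here's a minimal sketch of what a quality gate could look like. The threshold, score source, and atomicity heuristic are all my assumptions for illustration, not the actual implementation:

```python
# Hypothetical quality gate for extracted facts. model_score would come from
# the extraction model's own 0..1 quality rating; the atomicity check and the
# 0.7 threshold are illustrative assumptions.
def keep_fact(fact_text: str, model_score: float, threshold: float = 0.7) -> bool:
    # Crude atomicity heuristic: a single sentence with no conjoined clauses.
    atomic = fact_text.count(".") <= 1 and " and " not in fact_text
    return atomic and model_score >= threshold

keep_fact("User prefers dark mode.", 0.9)            # kept
keep_fact("User likes Rust and owns a dog.", 0.9)    # rejected: not atomic
keep_fact("User said something vague.", 0.3)         # rejected: low score
```

The point is that the store only ever sees facts that pass both checks, which is what keeps retrieval quality high later.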
Technical approach: A three-layer architecture with GPT-3.5 handling fact extraction/tagging, a quality filter, and hybrid vector + tag-based search for retrieval. The focus is context quality over quantity. I built most of it with Codex, which was pretty cool for rapid prototyping.
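For anyone curious what "hybrid vector + tag-based search" might mean in practice, here's a rough sketch: blend cosine similarity on embeddings with tag overlap and the stored quality score. The `Fact` class, the weights, and the Jaccard tag scoring are all illustrative assumptions on my part, not the real system:

```python
# Hypothetical hybrid retrieval: embedding similarity + tag overlap + quality.
# All names and weights here are illustrative, not the actual implementation.
from dataclasses import dataclass, field
import math

@dataclass
class Fact:
    text: str
    embedding: list                       # vector from an embedding model
    tags: set = field(default_factory=set)
    quality: float = 1.0                  # 0..1 score from extraction time

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def tag_overlap(query_tags, fact_tags):
    # Jaccard similarity between tag sets.
    if not query_tags or not fact_tags:
        return 0.0
    return len(query_tags & fact_tags) / len(query_tags | fact_tags)

def retrieve(query_vec, query_tags, facts, k=3, w_vec=0.6, w_tag=0.3, w_q=0.1):
    # Blend the three signals so only relevant, well-scored facts surface.
    scored = [
        (w_vec * cosine(query_vec, f.embedding)
         + w_tag * tag_overlap(query_tags, f.tags)
         + w_q * f.quality, f)
        for f in facts
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [f for _, f in scored[:k]]
```

The nice property of weighting quality into the ranking is that a marginally relevant but high-confidence fact can beat a noisy near-match, which matches the quality-over-quantity goal.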
Looking for developer perspectives:
- How are MCP remote servers working with ChatGPT for you? (Still wonky for me compared to Claude.)
- Any performance concerns with this approach vs. built-in memory?
- Thoughts on using a separate model for context processing?
The MCP ecosystem is pretty new - interested in hearing what you guys think about it.
If you want to test it out or want more context → teamlayer.xyz