Managing Conversation History for Advanced AI Projects Beyond RAG Solutions (No Code)

Hello OpenAI Community Gods!

I’m working on a large-scale project using a new technique I’ve developed called Synthetic Voting Ensembles (SVE), and I’m facing challenges with ChatGPT Plus’s conversation memory retention. Importantly, I’m deliberately not using RAG systems, as my workflow requires maintaining continuous conversational context rather than document retrieval. I’ve found a sweet-spot output token size on a GPT model that gives me 97-100% accuracy every time.

About My Approach (without revealing IP details)

My SVE technique involves multiple model instances, together with another L-LLM I have created, working collaboratively through a specialized voting mechanism to achieve top-notch results on complex tasks that demand extremely high language accuracy. (A generic, non-proprietary sketch of the voting idea follows the list below.)

This approach:

  • Requires maintaining consistent conversational memory across multiple sessions

  • Depends on referencing specific past exchanges to inform current decision making

  • Needs reliable access to historical conversation data for the ensemble voting process
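
To make the general shape of the idea concrete without exposing my IP (and since this is otherwise a no-code question), here is a rough, generic majority-vote sketch in Python. The helper names `call_model` and `majority_vote` are purely illustrative placeholders, not my actual SVE implementation:

```python
from collections import Counter
from typing import Callable, Dict, List

def call_model(prompt: str, history: List[Dict[str, str]]) -> str:
    """Placeholder for one ensemble member. In practice this would wrap
    whatever model you use, passing `history` (prior exchanges) as context."""
    raise NotImplementedError("wire this up to your model of choice")

def majority_vote(prompt: str,
                  history: List[Dict[str, str]],
                  voters: List[Callable[[str, List[Dict[str, str]]], str]]) -> str:
    """Ask every ensemble member the same question and return the answer
    that the largest number of members agree on."""
    answers = [voter(prompt, history) for voter in voters]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

# Usage (illustrative): three independent instances vote on one prompt.
# ensemble = [call_model, call_model, call_model]
# best = majority_vote("Classify this clause.", history=[], voters=ensemble)
```

The reason I care so much about history is that `history` in this sketch has to be rebuilt faithfully every session, which is exactly where the conversation retention limits bite.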

Due to the proprietary nature of my work (though I’m sure OpenAI has something similar), I can’t share all implementation details, but the memory retention challenge is a critical bottleneck for me.

My Questions About ChatGPT Plus/Pro Conversation History and Capacity

  1. How extensive is conversation history storage for Plus and Pro users? Is there a documented limit to how many past conversations are retained and accessible?

  2. Can conversation retention be prioritized? Is there any way to mark certain conversations as “critical” so they aren’t pushed out when new conversations accumulate?

Potential Solutions I’m Considering

Memory Prioritization System

Has anyone developed techniques to ensure certain key conversations remain accessible long-term? This would be essential for maintaining the voting record in my ensemble system.

Conversation Preservation Architecture

Rather than traditional external vector databases (which wouldn’t suit my specific needs and would destroy accuracy), I’m considering a specialized system that preserves the conversational format while remaining accessible to my SVE framework.
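
As a purely hypothetical illustration of what I mean by “preserving the conversational format” (the filename and helper names are made up, not my actual system), something as simple as an append-only JSONL archive would keep every turn in its original role/content shape so it can be replayed later:

```python
import json
from pathlib import Path

ARCHIVE = Path("sve_conversations.jsonl")  # illustrative filename

def archive_turn(conversation_id: str, role: str, content: str) -> None:
    """Append one turn to a local archive, keeping the original
    role/content structure instead of embedding it into a vector store."""
    record = {"conversation": conversation_id, "role": role, "content": content}
    with ARCHIVE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def load_conversation(conversation_id: str) -> list:
    """Rebuild a conversation, in order, as a list of
    {'role': ..., 'content': ...} turns ready to re-inject as context."""
    turns = []
    if not ARCHIVE.exists():
        return turns
    with ARCHIVE.open(encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record["conversation"] == conversation_id:
                turns.append({"role": record["role"], "content": record["content"]})
    return turns
```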

Custom Retention Mechanisms

I’m exploring options to develop a method that would work alongside ChatGPT’s existing memory system to enhance retention of the critical conversations that feed into my ensemble voting process.
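
Again purely as a sketch of the direction I’m thinking in (the names `mark_critical` and `build_context` are hypothetical, and this runs entirely on my side rather than changing anything in ChatGPT’s built-in memory), the retention layer could simply flag certain conversations as critical and re-inject them at the start of each new session:

```python
from typing import Dict, List

Turn = Dict[str, str]            # {"role": ..., "content": ...}
_critical: Dict[str, bool] = {}  # my own priority flags, kept locally

def mark_critical(conversation_id: str) -> None:
    """Flag a conversation so my own store never drops it,
    no matter what the built-in memory feature retains."""
    _critical[conversation_id] = True

def build_context(new_prompt: str, archive: Dict[str, List[Turn]]) -> List[Turn]:
    """Open a fresh session by re-injecting every critical conversation
    ahead of the new prompt (insertion order of the archive is preserved)."""
    context: List[Turn] = []
    for cid, turns in archive.items():
        if _critical.get(cid):
            context.extend(turns)
    context.append({"role": "user", "content": new_prompt})
    return context
```

The open question is whether anything like this can be prioritized on OpenAI’s side, instead of me maintaining a parallel archive.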

Has anyone:

  • Measured the effective conversation history retention capacity of ChatGPT Plus?

  • Developed techniques to preserve specific conversations for ongoing reference?

  • Created innovative approaches to conversation memory management beyond the standard RAG paradigm?

Since my SVE technique operates differently from conventional LLM applications, traditional solutions may not apply. I’d appreciate hearing from anyone who has explored the boundaries of ChatGPT’s conversation retention capabilities or developed effective memory-management approaches.

Thank you for any insights you can share while respecting that specific details of my SVE implementation remain proprietary. :100: