Sub-Threads within Custom GPTs

I’ve been struggling a bit with the efficiency of Custom GPTs. I described the problem to my general ChatGPT session, and here is what it suggested. I want to make sure I’m not missing anything — either this is already possible, or it could be a future update to the API.

Main issue: Memory cannot be stored across multiple threads within a Custom GPT. Having one enormous thread to refer back through is inefficient and may cause errors in verbatim recall. I am looking for sub-threads within a Custom GPT (i.e., a master memory, plus the option for individually organized topic threads).

ChatGPT said:

Current Frustrations:

  1. Memory Limitations: Custom GPTs only remember the ongoing conversation in a single thread, so switching to a new thread erases all context.

  2. Verbatim Verbiage Issues: When summarizing or recalling details from a prior conversation, the model might inadvertently reframe or simplify details, which is problematic if precision is important.

  3. Cluttered Threads: A single, long thread becomes unwieldy when juggling multiple facets of a larger project, making it hard to navigate past discussions.

Proposed Solution:

  • Thread Management: Enable the creation of sub-threads or sub-folders within a single Custom GPT, where:
    • Each sub-thread retains its own memory of conversations within that context.
    • You can reference or switch between sub-threads without losing context from the overarching project.
    • A “master thread” or shared memory could link these sub-threads together for higher-level summarization or continuity.

This structure would help:

  • Keep individual threads more focused and organized.
  • Allow for accurate, context-specific conversations without losing track of the bigger picture.
  • Make it easier to revisit past discussions with better clarity.

Additional Features:

  • Search and Recall: A feature to search past sub-threads for specific content without losing context or precision.
  • Cross-Thread Summaries: Option to summarize multiple sub-threads into a cohesive update in the main thread.
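The proposed structure can be sketched as a simple data model: each sub-thread keeps its own message history, while a master memory links promoted summaries across topics. This is a hypothetical illustration of the feature request, not an existing OpenAI capability; all class and method names here are invented.

```python
from collections import defaultdict

class ProjectMemory:
    """Hypothetical sketch of sub-thread memory inside one Custom GPT.

    `master` holds project-wide entries; each sub-thread keeps its own
    message history, so switching topics does not erase context.
    """

    def __init__(self):
        self.master = []                  # shared, project-wide memory
        self.threads = defaultdict(list)  # per-topic message histories

    def add(self, topic, message):
        """Record a message in one sub-thread's own context."""
        self.threads[topic].append(message)

    def promote(self, topic, summary):
        """Link a sub-thread summary into the master memory."""
        self.master.append((topic, summary))

    def search(self, term):
        """Search all sub-threads for specific content, keeping track
        of which topic each hit came from."""
        return [(topic, msg) for topic, msgs in self.threads.items()
                for msg in msgs if term.lower() in msg.lower()]

    def cross_thread_summary(self):
        """Combine promoted summaries into one cohesive master update."""
        return "\n".join(f"[{t}] {s}" for t, s in self.master)

mem = ProjectMemory()
mem.add("branding", "Logo draft v2 approved by the team.")
mem.add("launch", "Launch window moved to Q3.")
mem.promote("launch", "Launch is now targeted for Q3.")
print(mem.search("logo"))
print(mem.cross_thread_summary())
```

The point of the sketch is the separation of concerns: `search` gives the “Search and Recall” feature, and `cross_thread_summary` gives the “Cross-Thread Summaries” feature, both without collapsing everything into one enormous thread.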

Hypothetical Real-World Application
I have a project that I’ve built a custom GPT for.
Within this project there are multiple sectors:

  • Launch time management
  • Team management
  • My own personal fears/hesitations/limiting beliefs
  • Suggested branding context strategy
  • etc.

If all of this information lives in one enormous thread, it is difficult to refer back to. If my Custom GPT had sub-threads, I would be able to approach each of them separately while also maintaining the MASTER MEMORY of the overall project. The GPT would also be able to access the sub-threads and cross-reference them.
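In the meantime, one way to roughly approximate this workflow by hand is to keep each sector’s transcript in its own file and prepend a shared project summary to whichever sub-thread you are prompting. A minimal sketch — the file layout (`master.txt` plus one file per sector) and the prompt format are my own assumptions, not an OpenAI convention:

```python
from pathlib import Path
import tempfile

def build_prompt(project_dir, topic, question):
    """Assemble a prompt from the master summary plus one topic's history.

    Assumed layout: `master.txt` holds project-wide context; each sector
    (e.g. 'launch', 'team', 'branding') has its own `<topic>.txt` file.
    """
    project = Path(project_dir)
    master = (project / "master.txt").read_text()
    history = (project / f"{topic}.txt").read_text()
    return (f"Project context:\n{master}\n\n"
            f"Sub-thread '{topic}' so far:\n{history}\n\n"
            f"User: {question}")

# Demo with a throwaway project directory.
with tempfile.TemporaryDirectory() as d:
    Path(d, "master.txt").write_text("App launch project, target Q3.")
    Path(d, "branding.txt").write_text("Agreed on a minimalist logo.")
    prompt = build_prompt(d, "branding", "What colour palette fits?")
    print(prompt)
```

Every sub-thread prompt then carries the master context, which is the “refer between themselves” behaviour, approximated by injection rather than by native memory.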

Thoughts?


Thoughts:

They already thought about this looong before you and I had the idea.

I think there’s a reason they haven’t offered us multiple memory spaces yet, where users could switch between them. That would, IMO, be obviously beneficial to everyone and relatively simple to code.

Now apply your hypothetical changes to an explosively growing user base of over 300M:

  • Both the project-specific memory AND the main memory would now have to be scanned for relevant entries before prompting the model, which may cost significant resources. If increasing capacity isn’t feasible, the main memory would have to be cut to 50% of its current size so the other 50% could serve as project-specific memory — in which case each becomes so small that the utility of both decreases.

  • Scaling database infrastructure to reliably support multiple memories for that many users is a very fine art and requires prudent, diligent A/B testing; if the mistakes that will inevitably be made aren’t caught, every user will suffer them. Large-scale projects like this aren’t built in a day.

I completely agree with you that it would be a profound game changer, and I bet OpenAI does too. They just don’t want to rush these changes, because as we all know: “There’s nothing more definitive than a temporary solution.”

But innovation is on the roll and competition is fierce so this is just a matter of time IMO. Should be possible within 2025.

I have developed a solution to this problem, but it’s one of my secret techniques.
I can tell you it is possible today; you need to be clever and leverage several technologies together to achieve what you want.
Since I need help advancing further (I have significant financial constraints), I’ve been seeking help and collaboration in developing the new AI paradigm and best practices.
Let me know if this helped.