Struggling with Long Custom GPT Chats? Here’s My 3-Step Solution

Hi all,

When working with Custom GPTs on long, structured tasks (e.g., building educational content), I often hit the context limit or notice performance degrading over time.

To maintain continuity across sessions, here’s what I’m doing:

  1. At the end of each long session, I export the entire chat log as plain text.

  2. I run the log through Claude (or GPT) with prompts to:

    • First, structure and clean the conversation for readability.
    • Then, further condense it into a high-level “refined summary”.

  3. When starting a new GPT session, I do the following:

    • Upload the refined summary first, to re-establish the general context.
    • If needed, I add the structured version for more details.
    • As a last resort, I add the raw conversation log for full traceability.

This three-tier strategy helps me balance brevity against accuracy.
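To make the escalation logic in step 3 concrete, here is a minimal Python sketch of how the tiers could be selected under a token budget. Everything here is my own illustration, not part of any GPT feature: the function names are made up, and the ~4-characters-per-token estimate is a rough rule of thumb (a real tokenizer would be more accurate).

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token (illustrative only)."""
    return len(text) // 4

def build_context(refined: str, structured: str, raw: str, budget: int) -> list[str]:
    """Select which tiers to upload, most condensed first.

    Always try the refined summary, then add the structured version,
    and finally the raw log, stopping as soon as the next tier
    would exceed the token budget.
    """
    selected: list[str] = []
    used = 0
    for tier in (refined, structured, raw):
        cost = estimate_tokens(tier)
        if used + cost > budget:
            break  # this tier (and anything larger) won't fit
        selected.append(tier)
        used += cost
    return selected
```

For example, with a small budget only the refined summary fits; with a generous budget all three tiers get included. The design choice is deliberate: the most condensed tier goes in first, so the model always has the general context even when detail has to be dropped.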

My question is:

  • Has anyone else tried similar workflows?
  • Are there better ways to “resume” long GPT projects across sessions?
  • Is there a smarter way to represent prior interactions without overwhelming the model?

I have seen a related post titled “How to save output of long-running custom GPT”,
but I believe my question is distinct: it’s more about maintaining contextual continuity across sessions, rather than preserving output from a single interaction.

Thanks in advance — I’d love to hear how others tackle this challenge.