I’ve seen a lot of discussion here around long-term use of ChatGPT:
- context drift
- loss of continuity across sessions
- scope bleed between projects
- unintentional anthropomorphism or authority creep
Most approaches I’ve seen focus on personas or memory tricks.
I’ve been working in a different direction: treating continuity as a deterministic, human-auditable systems problem rather than a conversational one.
I’ve published a public reference specification that documents one architectural approach to long-horizon AI continuity using:
- explicit scope separation
- human-readable continuity packets
- deterministic boot order
- versioned snapshots instead of implicit memory
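To make the ideas above concrete, here is a minimal sketch of what a human-readable continuity packet with versioned snapshots and a deterministic boot order could look like. The names (`ContinuityPacket`, `boot`, the field layout) are my own illustration, not from the published spec:

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical sketch: a versioned, human-readable continuity packet.
# Field names are illustrative, not taken from the reference specification.
@dataclass
class ContinuityPacket:
    version: int                 # explicit snapshot version, not implicit memory
    scope: str                   # e.g. "project-alpha"; scopes never share state
    facts: list[str] = field(default_factory=list)           # auditable plain-text state
    open_questions: list[str] = field(default_factory=list)  # carried across sessions

    def to_json(self) -> str:
        # Serialize deterministically (sorted keys) so diffs between
        # snapshot versions are human-reviewable.
        return json.dumps(asdict(self), indent=2, sort_keys=True)

def boot(packets: list[ContinuityPacket], scope: str) -> list[ContinuityPacket]:
    """Deterministic boot order: load only the requested scope,
    oldest version first, so replay is reproducible and auditable."""
    selected = [p for p in packets if p.scope == scope]
    return sorted(selected, key=lambda p: p.version)
```

The point of the sketch is that everything the model is “remembering” lives in versioned, diffable text, and session startup is a pure function of those packets rather than of hidden conversational state.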
This is not a product, companion, or live system.
It’s a public architectural snapshot meant to:
- establish prior art
- document design principles
- provide a stable reference point for discussion
Important boundaries up front:
- This framework avoids personas by design
- It is not therapeutic or advisory
- It does not imply agency, identity, or consciousness
- It does not expose private or personal data
- It is intentionally incomplete as a public reference
I’m posting this to compare notes with others who are:
- doing long-horizon work with LLMs
- running into drift or scope bleed
- trying to keep systems boring, auditable, and safe over time
I’m especially interested in:
- failure modes others have encountered
- alternative continuity strategies
- places where this architecture might break down
If you’re building something similar, I’d love to hear how you’re approaching it.