Title: Enable ChatGPT agents to converse across threads via user-authorized agent-to-agent queries
Feature request:
Allow a ChatGPT conversation to access context from a previous conversation by interacting with another ChatGPT agent tied to that earlier thread, with the user’s permission. This would let new threads continue prior discussions without losing clarity, nuance, or continuity, and without requiring the user to manually summarize and paste context.
Problem:
When a conversation becomes long, it eventually runs into practical problems:
- Latency increases in both UI and backend
- Token limits truncate earlier content
- Browsing or searching long threads becomes difficult, especially on mobile
To work around this, users often split a discussion into new threads. But this creates a major problem: ChatGPT loses access to the reasoning and decisions in the earlier conversation, unless the user takes the time to summarize or copy everything over manually—which is error-prone and burdensome.
Proposed solution:
Let users authorize a new ChatGPT thread to query a previous thread conversationally via an inter-agent interaction.
For example:
“Continue discussion on the morbidity model from [link to thread]. Focus now on correlates of lifestyle choices.”
The new ChatGPT instance would then initiate a dialogue with the old one:
New ChatGPT: “In the previous thread, what assumptions did we make about age-adjusted baseline risk?”
Old ChatGPT: “We used CDC data from 2021 and assumed a log-linear baseline risk curve. Would you like the equation again?”
New ChatGPT: “Yes, and remind me how we handled missing lifestyle data.”
Old ChatGPT: “We assumed MAR and used multiple imputation with 5 replicates. Do you want the imputation rationale?”
This back-and-forth exchange between the agents lets the new session build an incremental, adaptive picture of the prior context, something no static summary or retrieval chunk can match.
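To make the flow concrete, here is a minimal Python sketch of how such an exchange could be orchestrated. It is only an illustration under assumed names: ThreadAgent, its keyword-lookup answer(), and continue_discussion() stand in for real model calls and are not part of any existing ChatGPT API.

```python
from dataclasses import dataclass, field

@dataclass
class ThreadAgent:
    """Stand-in for a ChatGPT agent bound to one conversation thread.

    In a real system, answer() would be a model call with the thread's
    full history as context; here it is a toy keyword lookup so the
    sketch stays self-contained and runnable.
    """
    thread_id: str
    transcript: list[str] = field(default_factory=list)

    def answer(self, question: str) -> str:
        hits = [line for line in self.transcript
                if any(word.lower() in line.lower()
                       for word in question.split() if len(word) > 4)]
        return " / ".join(hits) if hits else "No relevant context found."

def continue_discussion(new_agent_questions: list[str],
                        old_agent: ThreadAgent) -> list[tuple[str, str]]:
    """Drive the inter-agent exchange: the new thread asks, the old thread
    answers, and each (question, answer) pair is imported as context."""
    imported_context = []
    for question in new_agent_questions:
        reply = old_agent.answer(question)  # old thread responds from its own history
        imported_context.append((question, reply))
    return imported_context

# Example usage with a fabricated prior thread about the morbidity model.
old = ThreadAgent(
    thread_id="thread_morbidity_model",
    transcript=[
        "Baseline risk: log-linear curve fitted to CDC 2021 data.",
        "Missing lifestyle data assumed MAR; multiple imputation, 5 replicates.",
    ],
)
for q, a in continue_discussion(
        ["What assumptions did we make about baseline risk?",
         "How did we handle missing lifestyle data?"], old):
    print(f"New thread asks: {q}\nOld thread replies: {a}\n")
```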
Key properties:
- Users must explicitly permit this interaction, specifying the source thread(s); see the sketch of a scoped grant after this list
- Agents converse briefly or persistently, depending on need
- If persistent querying is too expensive, users can manually instruct the new thread to “ask the old thread about X”
- The new agent can interpret, rephrase, or critique prior reasoning
- Users remain in control of what’s imported or trusted
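One way the permission model could work is an explicit, scoped grant that the new thread must present before any cross-thread query is forwarded. The ContextGrant structure and ask_old_thread() helper below are assumptions introduced here for illustration, not an existing API; they simply show that thread scope, a query budget, and expiry can all be enforced mechanically, including the manual “ask the old thread about X” path.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ContextGrant:
    """User-issued permission for one new thread to query specific old threads."""
    granted_by_user: str
    source_thread_ids: frozenset[str]  # threads the user explicitly listed
    max_queries: int                   # budget if persistent querying is too costly
    expires_at: datetime
    queries_used: int = 0

    def permits(self, source_thread_id: str) -> bool:
        return (source_thread_id in self.source_thread_ids
                and self.queries_used < self.max_queries
                and datetime.now(timezone.utc) < self.expires_at)

def ask_old_thread(grant: ContextGrant, source_thread_id: str, question: str) -> str:
    """Manual path: the user tells the new thread to 'ask the old thread about X'.
    The query is only forwarded when the grant covers the target thread."""
    if not grant.permits(source_thread_id):
        return "Query refused: no valid user authorization for that thread."
    grant.queries_used += 1
    # In a real system this would dispatch to the old thread's agent; stubbed here.
    return f"[forwarded to {source_thread_id}] {question}"

# Example usage: one permitted query, one refused query.
grant = ContextGrant(
    granted_by_user="user_123",
    source_thread_ids=frozenset({"thread_morbidity_model"}),
    max_queries=5,
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),
)
print(ask_old_thread(grant, "thread_morbidity_model",
                     "What imputation rationale did we use?"))
print(ask_old_thread(grant, "thread_unrelated",
                     "What did we decide about pricing?"))
```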
Why this matters:
- Less work for the user
- Preserves logical continuity across sessions
- Enables modular, long-term projects without context decay
- Supports clarification, disagreement, and refinement between agents
- Leverages ChatGPT’s strengths as a conversational partner, not just a retriever
Conclusion:
This isn’t just context reuse—it’s cooperative cognition between AI agents, scoped and directed by the user. It unlocks richer, more flexible workflows without UI clutter or memory bottlenecks.