Suggestion: Decoupling Reasoning Execution and User Input in Codex (Async Context Injection)
I’d like to propose an idea regarding Codex’s current interaction model, especially for long-running or agent-style tasks.
At the moment, Codex behaves in a largely single-threaded way from a user interaction perspective: once a task starts running, the context is effectively frozen. Any new information, clarification, or constraint provided by the user can only be applied after the current execution finishes or is restarted.
This creates friction in scenarios where:
- Tasks are long-running
- Requirements evolve during execution
- The user notices missing constraints midway
- Codex is used more like an autonomous agent than a one-shot tool
Proposed idea
Decouple reasoning/execution from user information input, allowing asynchronous context injection while the agent is running.
Conceptually:
- Maintain two logical channels:
  - A reasoning / execution channel that continues uninterrupted
  - A user information channel that allows appending new context at any time
- Newly injected information would be buffered and tagged, rather than immediately interrupting execution
- The agent could decide when to incorporate the new information (e.g. at safe checkpoints), based on relevance or priority
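To make the two-channel idea concrete, here is a minimal sketch in Python. All names (`AsyncContextAgent`, `inject`, `checkpoint`) are hypothetical illustrations of the proposal, not part of any real Codex API: user input lands in a thread-safe buffer, and the reasoning loop drains it only at safe points between steps.

```python
import queue


class AsyncContextAgent:
    """Hypothetical sketch: decouple execution from user input."""

    def __init__(self):
        # User information channel: thread-safe buffer for injected context.
        self._inbox = queue.Queue()

    def inject(self, message, priority=0):
        """Append new context at any time; never interrupts the agent."""
        self._inbox.put((priority, message))

    def checkpoint(self):
        """Drain buffered context at a safe point, highest priority first."""
        pending = []
        while True:
            try:
                pending.append(self._inbox.get_nowait())
            except queue.Empty:
                break
        return sorted(pending, reverse=True)

    def run(self, steps):
        """Reasoning/execution channel: runs uninterrupted, consulting
        the buffer only between steps."""
        results = []
        for step in steps:
            results.append(step())
            for _, msg in self.checkpoint():
                results.append(f"incorporated: {msg}")
        return results


# Usage: context injected mid-run is picked up at the next checkpoint.
agent = AsyncContextAgent()
agent.inject("also add logging", priority=1)
out = agent.run([lambda: "step-1"])
```

The key design choice is that `inject` and `run` never block each other: incorporation happens only when the agent reaches a checkpoint it considers safe.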
Why this might help
- Feels closer to real human collaboration
- Reduces the cost of restarting long tasks
- Better fits agent-style workflows where thinking and communication are not strictly sequential
I’m curious whether others here have run into similar limitations, or have thoughts on how something like this could (or couldn’t) work in practice.
This post was written and translated with the help of ChatGPT.
The original idea and proposal are by u679c