Right now, when using Codex (or other task-driven flows), the model executes tasks in a “fire-and-forget” manner.
For example, if I ask the model to generate a 5-step task list and start working through them, the process continues until completion without any chance to adjust or correct the direction along the way.
The only option today is to stop the entire process, which often wastes progress; by the time the run finishes, it is too late to add feedback or make small course corrections.
Suggestion:
Introduce a way for the model to periodically pause and check for user feedback.
For example:
- During a long-running or multi-step process, the model could prompt: “Do you want to adjust anything before I continue?”
- Users could provide inline comments/feedback mid-task (like “skip step 3” or “change step 4 to use method B instead”).
- Developers could configure how often the model checks for this input (every N steps, after subtasks, or time-based).
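The checkpoint behavior described above could be sketched as a simple task loop. Everything here is hypothetical — `run_tasks`, `feedback_fn`, and `checkpoint_every` are illustrative names, not part of any real Codex API — but it shows how "every N steps" pausing and inline edits like "skip step 3" could compose:

```python
def run_tasks(tasks, execute, feedback_fn, checkpoint_every=2):
    """Run tasks in order, pausing every `checkpoint_every` steps to ask
    for adjustments. `feedback_fn(step, results)` returns either None or
    a dict of inline edits, e.g. {"skip": {3}, "replace": {4: "..."}}.
    (Hypothetical sketch; not an actual Codex interface.)"""
    skip, replace = set(), {}
    results = []
    for i, task in enumerate(tasks, start=1):
        if i in skip:
            results.append((i, "skipped"))
            continue
        results.append((i, execute(replace.get(i, task))))
        # Checkpoint: give the user a chance to course-correct mid-run
        # instead of only after everything has finished.
        if i % checkpoint_every == 0 and i < len(tasks):
            edits = feedback_fn(i, results)
            if edits:
                skip |= edits.get("skip", set())
                replace.update(edits.get("replace", {}))
    return results


# Usage: mirror the example feedback from the post — after step 2,
# the user skips step 3 and changes step 4 to use method B.
tasks = [f"step {n}" for n in range(1, 6)]

def feedback(step, results):
    if step == 2:
        return {"skip": {3}, "replace": {4: "step 4 (method B)"}}
    return None

out = run_tasks(tasks, execute=lambda t: f"done: {t}", feedback_fn=feedback)
```

A time-based variant would only differ in the checkpoint condition (e.g. comparing elapsed wall-clock time instead of `i % checkpoint_every`).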
This would make Codex much more interactive and aligned with how humans work collaboratively, with continuous feedback loops rather than feedback only at the end.
I believe this would prevent wasted computation and help users feel more in control during complex workflows.