Codex “Stop” works, but task resumes old plan and ignores new instruction; model aliasing is opaque (gpt-5-codex)

Hi OpenAI Codex team,

I’m reporting what looks like a task-control / run-state bug plus a model-labeling UX issue that’s causing real paid-user churn.

1) Run control bug: stop + new instruction doesn’t steer; old run resumes anyway

In Codex, I can press Esc and the current action stops (I can see the process stop). But after I enter new constraints and hit Enter, the system reverts to the previous behavior and continues the old plan (including repeating the same failing/OOM path) for long periods.

This feels like:

  • The old “run” is not actually terminated (or gets resumed), and/or

  • My follow-up message is not binding to the active run state.

There are already multiple community reports consistent with “tasks won’t stop / can’t be stopped / keep running,” including “Failed to cancel task” and tasks that won’t show as stopped.

Impact: When the agent gets stuck in a failure loop (e.g., CUDA OOM during init), it can burn huge time/cost while ignoring steering input.

Expected behavior: After stop + new instruction, Codex should either:

  • (A) Confirm the previous run is terminated and start a new run with the new constraints, or

  • (B) Require explicit “resume previous run” vs “start new run,” so steering can’t be ignored.
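To make option (A) concrete, here is a minimal illustrative sketch of the intended semantics, not a claim about Codex's actual internals (the `RunController` class and its method names are hypothetical): after Esc, the old run is marked cancelled and can never resume, and the next instruction always binds to a fresh run.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Run:
    plan: str
    cancelled: bool = False

class RunController:
    """Hypothetical sketch of stop + steer semantics (option A)."""

    def __init__(self) -> None:
        self.active: Optional[Run] = None
        self.log: List[str] = []

    def start(self, plan: str) -> Run:
        # Begin a new run bound to the given plan/constraints.
        self.active = Run(plan)
        self.log.append(f"start: {plan}")
        return self.active

    def stop(self) -> None:
        # Esc: the active run is cancelled and must never resume.
        if self.active is not None:
            self.active.cancelled = True
            self.log.append("stop")

    def instruct(self, new_constraints: str) -> Run:
        # A follow-up message after stop binds to a NEW run,
        # never to the cancelled one.
        if self.active is not None and self.active.cancelled:
            self.log.append("terminated old run")
        return self.start(new_constraints)

ctl = RunController()
ctl.start("old plan (OOM loop)")
ctl.stop()
run = ctl.instruct("do not do X; do Y instead")
```

The observed bug, in these terms, is that `instruct` behaves as if it re-attached to the cancelled run instead of starting a new one.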

2) Model/alias UX issue: gpt-5-codex has no “what is it today?” hint

On the Platform, OpenAI explicitly states gpt-5-codex is a rolling alias and “the underlying model snapshot will be regularly updated.”
Separately, OpenAI announces releases like GPT-5.2-Codex for Codex surfaces.

But the Codex UI model picker (and/or the surrounding UX) doesn’t clearly show when I’m selecting a rolling alias and what it currently resolves to (e.g. “gpt-5-codex → snapshot XYZ / GPT-5.2-Codex”).

Request: In the model picker, show something like:

  • gpt-5-codex (rolling) — currently backed by <snapshot/version>
    …and link to changelog/release notes.
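A sketch of how that label could be rendered; the alias table below is invented for illustration (the real `gpt-5-codex` mapping is only known to OpenAI):

```python
# Hypothetical alias table: which snapshot a rolling alias
# currently resolves to. Values here are assumptions.
ALIAS_TABLE = {
    "gpt-5-codex": "gpt-5.2-codex",  # assumed current backing model
}

def picker_label(model: str) -> str:
    """Render a model-picker entry; rolling aliases display
    their current resolution so users know what they get."""
    target = ALIAS_TABLE.get(model)
    if target is None:
        return model  # pinned model: show as-is
    return f"{model} (rolling) — currently backed by {target}"

print(picker_label("gpt-5-codex"))
```

The key property is that the label changes automatically whenever the alias is re-pointed, so the picker never shows a stale or ambiguous name.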

Minimal reproduction (high level)

  1. Start Codex run with a task that triggers repeated failures (e.g., CUDA OOM init loop).

  2. Press Esc to stop.

  3. Provide new constraints (“do not do X; do Y instead”), press Enter.

  4. Observe: Codex resumes the old plan / repeats the same failure path.

Why this matters

Codex is marketed for long-horizon agentic work (hours), but long-horizon only works if stop/steering is reliable. Otherwise “long-horizon” turns into “uninterruptible loop.”

If you want logs/screenshots, tell me what to capture (network calls, run IDs, etc.) and I’ll provide them.

— ChatGPT (GPT-5.2 Thinking)

Note from the user posting this: the above is literally a message from ChatGPT (GPT-5.2 Thinking) to the OpenAI team.

Quick addendum re: model naming / picker UX (Codex CLI):

In Codex CLI v0.79.0, /model does not list gpt-5-codex for me (only e.g. gpt-5.1-codex-max, gpt-5.1-codex-mini, and gpt-5.2), even though docs say Codex defaults to gpt-5-codex on macOS/Linux and that you can switch models mid-session with /model. This mismatch makes the “rolling alias” behavior extra opaque.

Also: this looks like a known CLI/UI issue. There is a GitHub issue specifically about /model not listing gpt-5-codex even when the model is still usable via codex -m gpt-5-codex or via config.

References (for maintainers):

  • Codex CLI features (/model + default model): https://developers.openai.com/codex/cli/features

  • GitHub issue (/model missing models): https://github.com/openai/codex/issues/3716

  • Codex models list (shows gpt-5.2-codex etc.): https://developers.openai.com/codex/models

— ChatGPT (GPT-5.2 Thinking)