Model Fatigue? How to Ask Codex to Run a Subagent for a Subtask?

I am running an experiment to see whether Codex can read a bunch of reports and compile insights for each category.

In AGENTS.md, I instructed Codex to organize the reports into categories and create one file per category. I also asked it to review each category file it created and produce insights for that category, following a scoring system (rubric) and the specific insight types and writing style I outlined.

It organized the reports correctly and created the category files. However, the insights were very poor.

If I take the same AGENTS.md prompt and feed ChatGPT exactly what Codex gathered, one category file at a time, the output is much better. The difference in quality is night and day.

It seems like Codex is experiencing model fatigue, so I want to ask it to process one category at a time by spawning a sub-agent for each category rather than handling all categories in a single pass.

Is there a way to do that?

Thanks!


Have you tried gpt-5.2? For non-coding tasks, the non-Codex models might align better with your expectations. The only con is that they are much slower in comparison.


See: Subagent Support · Issue #2604 · openai/codex · GitHub, comment by Git-on-my-level (David Zhang):

You are running inside the Codex CLI and need to orchestrate additional headless Codex instances (“subagents”) to tackle work individually or in parallel. Keep these lessons in mind:

  1. Launch subagents with `codex --yolo exec "your prompt"`; always quote/escape anything that the shell might interpolate (avoid unescaped backticks or `$()`).
  2. When spawning subagents via a shell tool call, override the wrapper timeout so each run can last up to 30 minutes, e.g. `timeout_ms: 1800000`.
  3. Parallel runs can be started with background jobs (`& … & wait`), but the wrapper may still report exit code 124 if the combined command exceeds the timeout; inspect each subagent’s log to confirm whether it completed.
  4. Subagent sessions inherit CLI defaults (e.g., approval policy, sandbox mode may still show as read-only), so plan prompts accordingly and keep them lightweight when possible.
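Under those constraints, a per-category fan-out could look like the sketch below. The `categories/` directory, the insights file naming, and the prompt wording are all assumptions about your layout, and `DRY_RUN=1` just prints the commands so you can sanity-check the quoting before launching anything:

```shell
# Sketch: one headless Codex subagent per category file, run in parallel.
# categories/ layout and prompt text are placeholders; adjust to taste.
spawn_subagents() {
  dir="$1"
  for f in "$dir"/*.md; do
    # Quote the whole prompt so the shell cannot interpolate backticks or $().
    prompt="Read $f, then follow AGENTS.md to write insights to ${f%.md}-insights.md"
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "would run: codex --yolo exec \"$prompt\""
    else
      codex --yolo exec "$prompt" &
    fi
  done
  wait  # block until every backgrounded subagent has finished
}

# Usage: spawn_subagents categories      (set DRY_RUN=1 to preview first)
```

Remember point 2 above: if this runs inside a shell tool call, raise the wrapper timeout so the `wait` doesn't get killed mid-run.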

No I haven’t! That’s a good idea!

That’s an interesting instruction to try!

Is there a flag to read the prompt from a file? That might be better, since it's going to be a very long prompt if the main agent hands off all the reports and instructions in yolo mode.
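One shell-level workaround I'm considering, assuming there is no dedicated flag, is to let the parent shell expand the file into the argument (`prompt.md` here is a hypothetical name):

```shell
# Keep the long rubric in prompt.md and splice it in from the parent shell.
# The double quotes keep the whole file as a single argument:
#
#   codex --yolo exec "$(cat prompt.md)"
#
# Quick check that the expansion really arrives as one argument:
printf 'long rubric line 1\nlong rubric line 2\n' > prompt.md
set -- "$(cat prompt.md)"
echo "args: $#"
```

Note this uses `$()` in the *parent* shell, which is fine; the warning above is about unescaped `$()` inside the prompt text handed to a subagent.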

You should consider using MCP.

You can ask it to build a shell script that starts a tmux session with split panes, where each pane runs a subagent. The subagents need a git MCP to grab the next issue, and then you just do project management in git issues.
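A minimal sketch of just the tmux plumbing (without the git-issue MCP part) might look like this; the session name, file layout, and prompt are placeholders, and emitting the commands as text lets you review them before piping to `sh`:

```shell
# Generate the tmux commands for a pane-per-category fan-out. Pane 0 stays a
# plain control shell; each split-window runs one headless subagent.
gen_tmux_cmds() {
  session="$1"; shift
  echo "tmux new-session -d -s $session"
  for f in "$@"; do
    echo "tmux split-window -t $session \"codex --yolo exec 'Process $f per AGENTS.md'\""
  done
  echo "tmux select-layout -t $session tiled"
  echo "tmux attach -t $session"
}

# Review first:          gen_tmux_cmds insights categories/*.md
# Then actually launch:  gen_tmux_cmds insights categories/*.md | sh
```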