Auto Compression Not Triggering – Codex Still Runs Out of Context Window

Codex currently tends to exhaust the context window very quickly. I’m using GPT-5.4 and have set the token limit to 1M, but after just a single round of conversation it shows:
“Codex ran out of room in the model’s context window. Start a new thread or clear earlier history before retrying.”
I also enabled automatic compression and even set a limit smaller than 1M, but it doesn’t seem to compress automatically, and the issue still persists.

1 Like

And what do you have config.toml set to…?

1 Like

What the hell…

Unsupported Model
Quotes Are Invalid

Unrealistic Context Settings
Codex already knows the context window for built-in models.
These overrides are intended only when using custom providers.
Setting them this high can cause memory thrashing and constant compaction.

Remove them entirely unless you are using a custom model provider.
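
For illustration only (model name and effort level are examples, not recommendations), a built-in-model config without those overrides might look like:

```toml
# Sketch: model_context_window and model_auto_compact_token_limit
# removed -- Codex already knows the context window for built-in models.
model = "gpt-5.3-codex"
model_reasoning_effort = "high"
```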

Token Limit Misuse
tool_output_token_limit = 32000

MCP Server Section Is Broken
The correct way

[mcp_servers.playwright]
command = "npx"
args = ["@playwright/mcp@latest"]

TBH IDK HOW YOUR CLIENT EVEN LETS YOU OPEN A CHAT OR THE SETTINGS WITHOUT ERRORING.

1 Like

Shall I continue? Because just about every single line is incorrect.

I am using a Pro account. Will this have any impact?

PROTECTED

sandbox = "restricted"
sandbox = "workspace-write"
sandbox = "danger-full-access"

FULL ACCESS
sandbox = "danger-full-access"

No, your settings are completely invalid syntax. I really do not know how it isn’t making your client unusable with an error screen on thread or settings load.

I’m not very sure either, because even though I set it up this way, it still works and can be used, and it also supports fast mode and the 1M-token context.

Where is this config.toml located? There is no way it’s actually loading and referencing that one.

That looks like it’s in the correct location/loading the standard config. But that’s freaking wild, there’s no way; send me the whole config.

Carefully. Don’t leak anything or any tokens.

EDIT
I do have to leave and will be back in the office in like 45 minutes so hang tight I’ll get you squared away.

1 Like

personality = “friendly”
model = “gpt-5.4”
model_reasoning_effort = “xhigh”
tool_output_token_limit = 128000
model_context_window = 1050000
model_auto_compact_token_limit = 1000000

mcp_servers.playwright

args = [“@playwright/mcp@latest”]
command = “npx”

windows

sandbox = “elevated”

personality = "friendly"
model = "gpt-5.3-codex"
model_reasoning_effort = "high"
tool_output_token_limit = 128000
sandbox = "danger-full-access"

[mcp_servers.playwright]
command = "npx"
args = ["@playwright/mcp@latest"]

BRB

Back buddy, hopefully it’s all working. I can walk you through some debugging, though, to ensure everything is loading properly!

I hope you can do more testing, since this issue may not be consistently reproducible. It is already working normally in another project of mine using the same configuration.

Can you elaborate? Is the updated config working correctly or not?

I’m using my own configuration again, and so far that weird issue we saw earlier hasn’t happened again.

Do me a favor and launch Codex App from the terminal window.
And send me the log if it throws any errors, they’ll be displayed in the terminal.

Trust me buddy, your config is not correct at all, it must be loading a different config file.

EDIT

Get-ChildItem -Path $HOME -Recurse -Filter config.toml -ErrorAction SilentlyContinue - Search for other configs.

echo $env:CODEX_CONFIG - Print out if you have an environment variable set.

Hello, I did configure it like that, and it actually did take effect.

1 Like