Hi, I would like to log in to the Codex CLI in the terminal using only the OPENAI_API_KEY that is set in my shell environment. This flow works when I create a custom model provider, but I’m not able to achieve this for the built-in openai provider. I don’t want to keep tokens in .codex/auth.json or the system keyring.
I tried using profiles as well, but the Codex CLI always prompts me to log in when I run it. Is this even possible?
You can open a shell with a shell script and let it do some magic. I am doing that with a split-pane tmux session. This way I can run nine Codex CLI instances in one window, so 27 across three screens, which is already way too much for my use cases.
I mean, in that setup you should be able to start the session → take the key from OPENAI_API_KEY → write it to Codex’s preferred storage → do the tasks → remove it from that storage again…
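The steps above could be sketched as a small wrapper script. This is a hypothetical sketch: the script name and the auth.json shape (`{"OPENAI_API_KEY": "..."}`) are assumptions, not a documented format, so verify against your Codex version before relying on it.

```shell
#!/bin/sh
# Hypothetical wrapper: materialize OPENAI_API_KEY where Codex looks for it,
# run the wrapped command, then clean up again.
set -e

AUTH_FILE="${CODEX_HOME:-$HOME/.codex}/auth.json"

mkdir -p "$(dirname "$AUTH_FILE")"
# Assumed auth.json shape; check your Codex version's actual format.
printf '{"OPENAI_API_KEY": "%s"}\n' "${OPENAI_API_KEY:-}" > "$AUTH_FILE"
chmod 600 "$AUTH_FILE"
echo "wrote $AUTH_FILE"

# Run the wrapped command, e.g.:  ./with-key.sh codex
"$@"

# Remove the key file once the session ends.
rm -f "$AUTH_FILE"
echo "removed $AUTH_FILE"
```

You would run it as `OPENAI_API_KEY=sk-... ./with-key.sh codex`, and point each tmux pane at the wrapper instead of at codex directly.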
The variable is already supported by third-party providers, and I expect it to be supported for the OpenAI key as well. While I can script this myself, I believe this basic use case should be supported in Codex from the beginning.
The number of instances running in your session is somewhat irrelevant here, or I completely misunderstood the point.
Succinctly: it looks doable from pure config, passing the environment variable and quieting any noise.
The OpenAI docs also say “sure”, but with zero guidance for those not getting their ChatGPT credits emptied quickly.
“Should be supported by Codex from the beginning”: we are past the beginning, so now you have to compile the Rust code, test out your changes, and open a PR for the feature. That’s part two…
Now it’s my turn to post some AI output to the forum, being a self-appointed expert prompter and chooser of AI models that will run for half an hour on a repo.
GPT-5.2 says…
Here’s what the Rust Codex CLI is doing today (from the repo you uploaded), and where the “use OPENAI_API_KEY automatically with zero prompts” behavior is currently blocked.
1) Where the Authorization: Bearer … header is actually set
Bearer header injection happens in codex-api:
codex-rs/codex-api/src/auth.rs → add_auth_headers() builds Authorization: Bearer {token} when an auth provider returns a token.
The token source is chosen in codex-core:
codex-rs/core/src/api_bridge.rs → auth_provider_from_auth() selects the token in this order:
1. provider.api_key() (i.e., env_key from [model_providers.*])
2. provider.experimental_bearer_token
3. auth.get_token() (i.e., from auth.json / login state)
4. none
So yes: there is a config surface that can drive bearer auth (env_key / experimental_bearer_token). It’s just not wired into the built-in openai provider in a way that lets you do “export OPENAI_API_KEY and go” without prompts.
2) Why OPENAI_API_KEY doesn’t “just work” in the interactive codex TUI
Two separate blockers:
A) The interactive TUI ignores env-key auth entirely
AuthManager can load an API key from the environment, but only if the caller enables it:
codex-rs/core/src/auth.rs → load_auth(..., enable_codex_api_key_env, ...) checks only CODEX_API_KEY, and only if that boolean is true.
But the interactive TUIs create the auth manager with that flag set to false.
So even if you enabled env auth at the manager level, the TUI would still think you’re unauthenticated and show onboarding unless you also teach it to treat env keys as “logged in”.
3) The “config.toml can set the header” part — but not for the built-in openai provider
You can absolutely configure a provider to read a key from an env var via:
ModelProviderInfo.env_key (read via ModelProviderInfo::api_key())
…but user config cannot override built-in providers right now because of how providers are merged:
codex-rs/core/src/config/mod.rs merges like this:
```rust
for (key, provider) in cfg.model_providers.into_iter() {
    model_providers.entry(key).or_insert(provider);
}
```
That means if you try [model_providers.openai] ... in config.toml, it will not override the built-in "openai" entry (it already exists, so or_insert does nothing). This is the code-path that blocks “configure openai to use env_key = OPENAI_API_KEY”.
Practical solutions
Solution A (no patch): define an “OpenAI API key” provider and switch to it
If you can tolerate using a different provider id (but still OpenAI endpoint / Responses API), add something like:
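The promised snippet appears to have been dropped from the post; here is a sketch of what such a provider entry might look like. The provider id and field values are assumptions on my part, so check them against the config docs for your Codex version:

```toml
# ~/.codex/config.toml
# Switch away from the built-in "openai" id to the custom provider below.
model_provider = "openai-api-key"

[model_providers.openai-api-key]
name = "OpenAI (API key)"
base_url = "https://api.openai.com/v1"
wire_api = "responses"
# Read the bearer token from the environment instead of auth.json.
env_key = "OPENAI_API_KEY"
```

Because this provider id does not collide with the built-in "openai" entry, the `or_insert` merge accepts it, and `env_key` drives the bearer auth described in section 1.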
Solution B (patch): with this change, OPENAI_API_KEY (and/or CODEX_API_KEY) becomes a first-class auth source for interactive codex.
With OPENAI_API_KEY set, the TUIs will:
report “logged in” (AuthMode::ApiKey),
skip onboarding login,
send Authorization: Bearer <key> (via the existing request pipeline).
Optional patch if you want [model_providers.openai] overrides to actually work
If your preferred approach is “keep provider id openai, but configure it via config.toml”, the current merge logic prevents overriding built-ins. Change or_insert to insert:
```diff
diff --git a/codex-rs/core/src/config/mod.rs b/codex-rs/core/src/config/mod.rs
index 0000000..0000000 100644
--- a/codex-rs/core/src/config/mod.rs
+++ b/codex-rs/core/src/config/mod.rs
@@ -1356,9 +1356,9 @@
     let mut model_providers = built_in_model_providers();
     // Merge user-defined providers into the built-in list.
     for (key, provider) in cfg.model_providers.into_iter() {
-        model_providers.entry(key).or_insert(provider);
+        model_providers.insert(key, provider);
     }
```
That’s the full chain from config → token selection → Authorization: Bearer …, plus the exact code points currently preventing the “just consume OPENAI_API_KEY automatically” behavior and the smallest patches that make it happen.