Login with OPENAI_API_KEY environment variable

Hi, I would like to log in to the Codex CLI in the terminal using just the OPENAI_API_KEY that is set in my shell environment. This flow works when I create a custom model provider, but I’m not able to achieve it for the built-in openai provider. I don’t want to keep tokens in .codex/auth.json or the system keyring.

I tried to use profiles as well, but Codex CLI always prompts me to log in when I run it. Is that even possible?


You can open a shell with a shell script and let it do some magic. I am doing that with a pane-split tmux session. This way I can run 9 Codex CLI instances in one window, so 27 across three screens, which is already way too much for my use cases.

I mean, in that setup you should be able to start the session → take the key from OPENAI_API_KEY → write it to their preferred method → do the tasks → remove their preferred method…


The variable is already supported by third-party providers, and I expect it to be supported for the OpenAI key as well. While I can script this myself, I believe this basic use case should be supported in Codex from the beginning.

The number of instances running in your session is somewhat irrelevant here, or I completely misunderstood the point.

Succinctly: it looks doable from pure config, by passing the environment variable and quieting any noise.

The OpenAI docs also say “sure”, but with zero guidance for those who don’t want their ChatGPT credits emptied quickly.

“Should be supported by Codex from the beginning”: we are past the beginning. Now you have to compile the Rust code, test out your change, and open a PR for the feature. That’s part two…

Now it’s my turn to post some AI output to the forum, being a self-appointed expert prompter and chooser of AI models that will run for half an hour on a repo.


GPT-5.2 says…

Here’s what the Rust Codex CLI is doing today (from the repo you uploaded), and where the “use OPENAI_API_KEY automatically with zero prompts” behavior is currently blocked.

1) Where the Authorization: Bearer … header is actually set

Bearer header injection happens in codex-api:

  • codex-rs/codex-api/src/auth.rs → add_auth_headers() builds Authorization: Bearer {token} when an auth provider returns a token.

The token source is chosen in codex-core:

  • codex-rs/core/src/api_bridge.rs → auth_provider_from_auth() selects the token in this order:

    1. provider.api_key() (i.e., env_key from [model_providers.*])
    2. provider.experimental_bearer_token
    3. auth.get_token() (i.e., from auth.json / login state)
    4. none

So yes: there is a config surface that can drive bearer auth (env_key / experimental_bearer_token). It’s just not wired into the built-in openai provider in a way that lets you do “export OPENAI_API_KEY and go” without prompts.
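The precedence chain above can be sketched as a few `Option` fallbacks. This is a hypothetical simplification, not the real codex-core types; the parameter names are illustrative stand-ins for `provider.api_key()`, `provider.experimental_bearer_token`, and `auth.get_token()`:

```rust
// Hypothetical sketch of the token precedence in auth_provider_from_auth().
fn select_token(
    env_key_value: Option<String>,       // 1. provider.api_key() via env_key
    experimental_bearer: Option<String>, // 2. provider.experimental_bearer_token
    stored_token: Option<String>,        // 3. auth.get_token() from auth.json
) -> Option<String> {
    env_key_value.or(experimental_bearer).or(stored_token) // 4. none
}

fn main() {
    // An env_key-derived token wins even when auth.json has a stored token.
    let token = select_token(Some("sk-env".into()), None, Some("sk-stored".into()));
    println!("{token:?}");
}
```

The point: if the built-in provider carried an `env_key`, step 1 would short-circuit before the auth.json lookup ever ran.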

2) Why OPENAI_API_KEY doesn’t “just work” in the interactive codex TUI

Two separate blockers:

A) The interactive TUI ignores env-key auth entirely

AuthManager can load an API key from the environment, but only if the caller enables it:

  • codex-rs/core/src/auth.rs → load_auth(..., enable_codex_api_key_env, ...) checks only CODEX_API_KEY, and only if that boolean is true.

But the interactive TUIs create the auth manager with that flag set to false:

  • codex-rs/tui/src/lib.rs → AuthManager::shared(..., false, ...)
  • codex-rs/tui2/src/lib.rs → same

So the TUI won’t pick up any env key automatically (even CODEX_API_KEY, let alone OPENAI_API_KEY).
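The gate can be sketched as follows. This is a minimal stand-in, not the actual codex-core code; the env value is passed in as a parameter instead of being read from the process environment, to keep the sketch deterministic:

```rust
// Illustrative sketch of load_auth()'s env gate: the environment is only
// consulted when the caller opts in, and only CODEX_API_KEY is considered.
fn read_env_api_key(
    enable_codex_api_key_env: bool,
    codex_api_key: Option<&str>, // stand-in for std::env::var("CODEX_API_KEY")
) -> Option<String> {
    if !enable_codex_api_key_env {
        return None; // the interactive TUIs pass `false`, so this always bails out
    }
    codex_api_key.filter(|v| !v.is_empty()).map(str::to_owned)
}

fn main() {
    // With the TUI's flag value, even a set CODEX_API_KEY is ignored.
    println!("{:?}", read_env_api_key(false, Some("sk-test")));
    println!("{:?}", read_env_api_key(true, Some("sk-test")));
}
```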

B) The TUI login screen decision ignores env-key auth anyway

The “am I logged in?” check in both TUIs only looks at auth.json:

  • codex-rs/tui/src/lib.rs → get_login_status() calls CodexAuth::from_auth_storage(...)
  • codex-rs/core/src/auth.rs → CodexAuth::from_auth_storage() explicitly calls load_auth(..., false, ...) (env disabled)

So even if you enabled env auth at the manager level, the TUI would still think you’re unauthenticated and show onboarding unless you also teach it to treat env keys as “logged in”.

3) The “config.toml can set the header” part — but not for the built-in openai provider

You can absolutely configure a provider to read a key from an env var via:

  • ModelProviderInfo.env_key (read via ModelProviderInfo::api_key())

…but user config cannot override built-in providers right now because of how providers are merged:

  • codex-rs/core/src/config/mod.rs merges like this:
for (key, provider) in cfg.model_providers.into_iter() {
    model_providers.entry(key).or_insert(provider);
}

That means if you try [model_providers.openai] ... in config.toml, it will not override the built-in "openai" entry (it already exists, so or_insert does nothing). This is the code-path that blocks “configure openai to use env_key = OPENAI_API_KEY”.
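The or_insert behavior is easy to demonstrate in isolation. A simplified stand-in for the provider map, with string values instead of ModelProviderInfo:

```rust
use std::collections::HashMap;

fn main() {
    // Stand-in for built_in_model_providers(): the "openai" key already exists.
    let mut providers: HashMap<String, String> =
        HashMap::from([("openai".to_string(), "built-in".to_string())]);

    // Current merge: or_insert() leaves an existing key untouched, so a
    // [model_providers.openai] section in config.toml is silently dropped.
    providers
        .entry("openai".to_string())
        .or_insert("user-config".to_string());
    assert_eq!(providers["openai"], "built-in");

    // Swapping to insert() (what the optional patch below does) would let the
    // user entry replace the built-in one.
    providers.insert("openai".to_string(), "user-config".to_string());
    assert_eq!(providers["openai"], "user-config");

    println!("override applied: {}", providers["openai"]);
}
```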


Practical solutions

Solution A (no patch): define an “OpenAI API key” provider and switch to it

If you can tolerate using a different provider id (but still OpenAI endpoint / Responses API), add something like:

model_provider = "openai-api"

[model_providers.openai-api]
name = "OpenAI (API key from env)"
base_url = "https://api.openai.com/v1"
wire_api = "responses"
env_key = "OPENAI_API_KEY"
requires_openai_auth = false
env_http_headers = { "OpenAI-Organization" = "OPENAI_ORGANIZATION", "OpenAI-Project" = "OPENAI_PROJECT" }

This path avoids the login screen because requires_openai_auth = false, and bearer auth comes from env_key.

If your complaint about “custom provider” was the chat wire API warnings, note the critical line above: wire_api = "responses".

Solution B (patch): keep model_provider = "openai" and make the TUI auto-consume OPENAI_API_KEY

This is the “zero friction” behavior you described: export OPENAI_API_KEY, run codex, no login screen, requests send Authorization: Bearer ….

Below is a minimal patch set that:

  1. Lets env auth read OPENAI_API_KEY (fallback after CODEX_API_KEY)
  2. Enables env auth in both TUIs
  3. Makes get_login_status() treat env keys as authenticated (skip onboarding)
diff --git a/codex-rs/core/src/auth.rs b/codex-rs/core/src/auth.rs
index 0000000..0000000 100644
--- a/codex-rs/core/src/auth.rs
+++ b/codex-rs/core/src/auth.rs
@@ -206,6 +206,15 @@ pub fn read_codex_api_key_from_env() -> Option<String> {
         .filter(|value| !value.is_empty())
 }
 
+/// Read an API key from the environment without touching on-disk auth.
+/// Precedence:
+/// 1) CODEX_API_KEY (Codex-specific)
+/// 2) OPENAI_API_KEY (common OpenAI tooling default)
+pub fn read_api_key_from_env() -> Option<String> {
+    read_codex_api_key_from_env().or_else(read_openai_api_key_from_env)
+}
+
 fn load_auth(
     codex_home: &Path,
     enable_codex_api_key_env: bool,
@@ -213,7 +222,7 @@ fn load_auth(
     auth_credentials_store_mode: AuthCredentialsStoreMode,
 ) -> std::io::Result<Option<CodexAuth>> {
-    if enable_codex_api_key_env && let Some(api_key) = read_codex_api_key_from_env() {
+    if enable_codex_api_key_env && let Some(api_key) = read_api_key_from_env() {
         let client = crate::default_client::create_client();
         return Ok(Some(CodexAuth::from_api_key_with_client(
             api_key.as_str(),
diff --git a/codex-rs/tui/src/lib.rs b/codex-rs/tui/src/lib.rs
index 0000000..0000000 100644
--- a/codex-rs/tui/src/lib.rs
+++ b/codex-rs/tui/src/lib.rs
@@ -394,7 +394,7 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> std
     session_log::maybe_init(&initial_config);
 
     let auth_manager = AuthManager::shared(
         initial_config.codex_home.clone(),
-        false,
+        true,
         initial_config.cli_auth_credentials_store_mode,
     );
@@ -673,6 +673,10 @@ pub enum LoginStatus {
 }
 
 fn get_login_status(config: &Config) -> LoginStatus {
     if config.model_provider.requires_openai_auth {
+        if codex_core::auth::read_api_key_from_env().is_some() {
+            return LoginStatus::AuthMode(AuthMode::ApiKey);
+        }
         let codex_home = config.codex_home.clone();
         match CodexAuth::from_auth_storage(&codex_home, config.cli_auth_credentials_store_mode) {
             Ok(Some(auth)) => LoginStatus::AuthMode(auth.mode),
diff --git a/codex-rs/tui2/src/lib.rs b/codex-rs/tui2/src/lib.rs
index 0000000..0000000 100644
--- a/codex-rs/tui2/src/lib.rs
+++ b/codex-rs/tui2/src/lib.rs
@@ -414,7 +414,7 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> std
     session_log::maybe_init(&initial_config);
 
     let auth_manager = AuthManager::shared(
         initial_config.codex_home.clone(),
-        false,
+        true,
         initial_config.cli_auth_credentials_store_mode,
     );
@@ -701,6 +701,10 @@ pub enum LoginStatus {
 }
 
 fn get_login_status(config: &Config) -> LoginStatus {
     if config.model_provider.requires_openai_auth {
+        if codex_core::auth::read_api_key_from_env().is_some() {
+            return LoginStatus::AuthMode(AuthMode::ApiKey);
+        }
         let codex_home = config.codex_home.clone();
         match CodexAuth::from_auth_storage(&codex_home, config.cli_auth_credentials_store_mode) {
             Ok(Some(auth)) => LoginStatus::AuthMode(auth.mode),

What this patch accomplishes

  • OPENAI_API_KEY (and/or CODEX_API_KEY) becomes a first-class auth source for interactive codex.

  • With OPENAI_API_KEY set, the TUIs will:

    • report “logged in” (AuthMode::ApiKey),
    • skip onboarding login,
    • send Authorization: Bearer <key> (via the existing request pipeline).

Optional patch if you want [model_providers.openai] overrides to actually work

If your preferred approach is “keep provider id openai, but configure it via config.toml”, the current merge logic prevents overriding built-ins. Change or_insert to insert:

diff --git a/codex-rs/core/src/config/mod.rs b/codex-rs/core/src/config/mod.rs
index 0000000..0000000 100644
--- a/codex-rs/core/src/config/mod.rs
+++ b/codex-rs/core/src/config/mod.rs
@@ -1356,9 +1356,9 @@
         let mut model_providers = built_in_model_providers();
         // Merge user-defined providers into the built-in list.
         for (key, provider) in cfg.model_providers.into_iter() {
-            model_providers.entry(key).or_insert(provider);
+            model_providers.insert(key, provider);
         }

Then you could do:

[model_providers.openai]
env_key = "OPENAI_API_KEY"
requires_openai_auth = false
wire_api = "responses"
base_url = "https://api.openai.com/v1"

…and keep model_provider = "openai".


That’s the full chain from config → token selection → Authorization: Bearer …, plus the exact code points currently preventing the “just consume OPENAI_API_KEY automatically” behavior and the smallest patches that make it happen.