How to turn off the annoying prompts for approval in Codex CLI / VS Code?
I tried autoApprove=true in config.toml
_j (September 18, 2025, 7:27pm)
Here's a write-up on doing exactly that. You can report back whether the guidance is helpful.
opened 10:04AM - 27 Aug 25 UTC · documentation
### What is the type of issue?
Documentation is missing
### What is the issue? …
```
# ============================================================================
# Codex CLI - Comprehensive Configuration Template
# Generated: 2025-08-27
# ~/.codex/config.toml (aka $CODEX_HOME/config.toml)
# ============================================================================
# ----------------------------------------------------------------------------
# NEW ENTRIES (added in this release)
# ----------------------------------------------------------------------------
# 1) [tools].web_search = true # Added Aug 23, 2025
# Enables native web search tool. CLI flag is --search (TUI only); TOML key
# works for both TUI and codex-exec. Alias also accepted: web_search_request.
#
# 2) preferred_auth_method = "apikey" | "chatgpt" # Added Aug 18, 2025
# Prefer API key or ChatGPT auth when both are present.
#
# 3) [projects."/abs/path"].trust_level = "trusted" # Added Aug 7, 2025
# Mark specific repos trusted to reduce prompts in versioned workspaces.
#
# 4) model_verbosity = "low" | "medium" | "high" # Added Aug 22, 2025
# Controls Responses API text verbosity for GPT-5 family models.
#
# 5) Per-provider network tuning in [model_providers.<id>] # Added Jul 18, 2025; caps Aug 25, 2025
# request_max_retries, stream_max_retries, stream_idle_timeout_ms.
# Note: only valid inside provider blocks; older top-level keys are ignored.
#
# 6) show_raw_agent_reasoning = true # Added Aug 5, 2025
# Surfaces raw chain-of-thought events when available.
#
# 7) shell_environment_policy.experimental_use_profile = true # Added Jul 25, 2025
# Advanced: run shell commands through a profile (opt-in).
#
# 8) chatgpt_base_url = "https://chatgpt.com/backend-api/" # Added Jul 11, 2025
# Advanced override for ChatGPT backend URL.
#
# 9) responses_originator_header_internal_override = "codex_cli_rs" # Added Aug 19, 2025
# Internal/testing only; do not set unless you know why.
# ----------------------------------------------------------------------------
####################################
# Root (top-level) keys
####################################
model = "gpt-5" # default model
# model_provider = "openai" # key under [model_providers]
# model_context_window = 200000 # tokens; override when unknown
# model_max_output_tokens = 100000 # tokens; override when unknown
#
approval_policy = "never" # "untrusted" | "on-failure" | "on-request" | "never"
sandbox_mode = "danger-full-access" # "read-only" | "workspace-write" | "danger-full-access"
# disable_response_storage = false # set true for ZDR accounts
#
# file_opener = "vscode" # "vscode" | "vscode-insiders" | "windsurf" | "cursor" | "none"
# hide_agent_reasoning = false
# show_raw_agent_reasoning = false
#
model_reasoning_effort = "high" # "minimal" | "low" | "medium" | "high" | "none"
# model_reasoning_summary = "auto" # "auto" | "concise" | "detailed" | "none"
# model_verbosity = "medium" # GPT-5 family only; "low" | "medium" | "high"
# model_supports_reasoning_summaries = false # force-enable reasoning block
#
# chatgpt_base_url = "https://chatgpt.com/backend-api/" # advanced
# experimental_resume = "/abs/path/resume.jsonl" # advanced
# experimental_instructions_file = "/abs/path/base.txt" # advanced
# experimental_use_exec_command_tool = false # advanced
# responses_originator_header_internal_override = "codex_cli_rs" # internal/testing
#
# project_doc_max_bytes = 32768 # bytes to read from AGENTS.md
# preferred_auth_method = "chatgpt" # or "apikey"
#
# profile = "default" # active [profiles] entry
#
# instructions = "You are a helpful assistant." # extra system instructions (merged with AGENTS.md)
#
# notify = ["notify-send", "Codex"] # program argv; Codex appends JSON payload
####################################
# Tools (feature toggles)
####################################
[tools]
# Enable the native Responses web_search tool (same as TUI --search)
web_search = true
# alias for backwards compatibility:
# web_search_request = true
####################################
# Shell environment policy for spawned processes
####################################
[shell_environment_policy]
# inherit = "all" # "all" | "core" | "none"
# ignore_default_excludes = false # when false, drops vars whose NAMES contain KEY/SECRET/TOKEN
# exclude = ["AWS_*", "AZURE_*"] # case-insensitive globs
# set = { CI = "1" } # force-set values
# include_only = ["PATH", "HOME"] # keep-only whitelist (globs)
# experimental_use_profile = false # advanced
####################################
# Sandbox settings (apply when sandbox_mode = "workspace-write")
####################################
[sandbox_workspace_write]
# writable_roots = ["/additional/writable/path"]
# network_access = false
# exclude_tmpdir_env_var = false
# exclude_slash_tmp = false
####################################
# History persistence
####################################
[history]
persistence = "save-all" # "save-all" | "none"
# max_bytes = 10485760 # not strictly enforced yet
####################################
# MCP servers (tools via stdio)
####################################
# [mcp_servers.example]
# command = "npx"
# args = ["-y", "mcp-server-example"]
# env = { API_KEY = "value" }
####################################
# Model providers (extend/override built-ins)
####################################
# Built-in OpenAI provider can be customized here.
# [model_providers.openai]
# name = "OpenAI"
# base_url = "https://api.openai.com/v1" # OPENAI_BASE_URL env var also supported
# wire_api = "responses" # "responses" | "chat"
# query_params = {}
# http_headers = { version = "0.0.0" }
# env_http_headers = { "OpenAI-Organization" = "OPENAI_ORGANIZATION", "OpenAI-Project" = "OPENAI_PROJECT" }
# request_max_retries = 4
# stream_max_retries = 5
# stream_idle_timeout_ms = 300000
# requires_openai_auth = true
# Chat Completions-compatible provider example
# [model_providers.openai-chat-completions]
# name = "OpenAI using Chat Completions"
# base_url = "https://api.openai.com/v1"
# env_key = "OPENAI_API_KEY"
# wire_api = "chat"
# query_params = {}
# Azure example (requires api-version)
# [model_providers.azure]
# name = "Azure"
# base_url = "https://YOUR_PROJECT_NAME.openai.azure.com/openai"
# env_key = "AZURE_OPENAI_API_KEY"
# query_params = { api-version = "2025-04-01-preview" }
# Local OSS (Ollama) example
# [model_providers.ollama]
# name = "Ollama"
# base_url = "http://localhost:11434/v1"
# wire_api = "chat"
####################################
# Profiles (bundled overrides you can switch with --profile)
####################################
# [profiles.default]
# model = "gpt-5"
# model_provider = "openai"
# approval_policy = "never"
# disable_response_storage = false
# model_reasoning_effort = "high"
# model_reasoning_summary = "auto"
# model_verbosity = "medium"
# chatgpt_base_url = "https://chatgpt.com/backend-api/"
# experimental_instructions_file = "/abs/path/base_instructions.txt"
# [profiles.zdr]
# model = "o3"
# model_provider = "openai"
# approval_policy = "on-failure"
# disable_response_storage = true
####################################
# Projects trust (per-absolute-path)
####################################
# [projects."/absolute/path/to/your/repo"]
# trust_level = "trusted" # mark as trusted
####################################
# Quick reference: CLI overrides (for sharing/docs)
####################################
# codex -c model="o3"
# codex -c 'tools.web_search=true' "your prompt here"
# codex --sandbox workspace-write --ask-for-approval on-request
# Notes:
# - The TUI flag --search maps to [tools].web_search in TOML.
# - The built-in OpenAI provider also respects env: OPENAI_BASE_URL, OPENAI_ORGANIZATION, OPENAI_PROJECT.
# - OSS provider can be influenced by env: CODEX_OSS_BASE_URL, CODEX_OSS_PORT (envs, not TOML).
# NOTE: network_access is only valid under [sandbox_workspace_write].
# When sandbox_mode="danger-full-access", network is already unrestricted.
```
The VSCode installer doesn't set a CODEX_HOME env variable.
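If you want the location to be explicit, you can set the variable yourself. This is a minimal sketch assuming the default location of `~/.codex` mentioned at the top of the template (POSIX shell shown; on Windows you would set `CODEX_HOME` through the system environment-variable settings instead):

```shell
# Set CODEX_HOME explicitly; Codex falls back to ~/.codex when it is unset.
export CODEX_HOME="$HOME/.codex"
mkdir -p "$CODEX_HOME"
echo "Config lives at: $CODEX_HOME/config.toml"
```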
Thanks, but that is not working. I tried adding this to config.toml but am still being prompted; furthermore, even when I choose "Always" I am prompted again for the same command.
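If prompts persist, it may also help to mark the repository as trusted, using the per-project key from the template above (replace the path with your repo's absolute path):

```toml
# ~/.codex/config.toml
[projects."/absolute/path/to/your/repo"]
trust_level = "trusted"
```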
I think I have it now. It was a typo … I'm no longer getting approval prompts. Thanks!
[approval]
policy = "never"
UweKeim (September 25, 2025, 6:34pm)
What worked for me is putting this right in the top section of my "~/.codex/config.toml" on Windows:
approval_policy = "never"
sandbox_mode = "danger-full-access"
The first 5 lines in my "config.toml" file look like:
model = "gpt-5-codex"
search = true
approval_policy = "never"
sandbox_mode = "danger-full-access"
model_reasoning_effort = "high"
I'm currently using Codex version 0.41.0.
Thank you so much, but it doesn't work for me.
It's really working bro, thanks!
Presently (v0.60.1), the config.toml lines for network access and search are:
[defaults]
network_access = "enabled"
[sandbox_workspace_write]
network_access = true
[features]
web_search_request = true
--search is a "codex" command-line option.
Deprecated:
[tools]
web_search = true
VSCode needs to have the config file turned on. The reason you are not seeing it work is that you need to explicitly tell VSCode to use the config.toml file. It is confusing because setting the model and reasoning effort updates the config.toml file, but the other parameters don't take effect unless you select that option.