VSC Codex extension not working

Hi everyone,

For a couple of weeks now, Codex no longer runs the way it used to, and I don't know what might have changed in the meantime. It always worked flawlessly before. It typically goes like this: I start a task like “read script xyz and summarize its contents”. The model (no matter which one) goes into thinking mode and either gets stuck there or in the next step, when it tries some simple tool call to read the script. It can stay in that state for minutes or even hours. I have deleted and reinstalled the extension and gone through the usual troubleshooting steps like deleting the .codex folder, auth.json, etc.

Any other ideas?

Edit:

  • I have also checked whether it might be due to syncing issues with OneDrive, but the issue occurs equally for local files.

Thank you!

Hi!

I’m not able to reproduce this; it seems to be working fine here.

Can you try on a different machine, on a different network connection? Also, have you made sure your ChatGPT account is linked and that you are not currently rate limited on usage?

Hey! Yes, same for me: I cannot reproduce it on my private laptop either, it works just fine there. It has to be something in the local setup, but I have no clue what it could be.
The account is linked; I even reauthenticated after deleting auth.json. Usage is definitely available, since I have not been able to use it, 100% is left.

It's up to date, so that is not the issue. There have also been various updates since the problem first appeared.

What might be happening are firewall/antivirus issues, given that it is a company laptop. I have found logs indicating that PowerShell was blocked, but only twice, so I am not sure that this explains all the times it isn't working.
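
For context, this kind of block event can be queried from the Defender operational log, assuming Microsoft Defender is the product in play (I'm not certain it is on this laptop; third-party AV tools keep their own logs):

```powershell
# Recent Microsoft Defender operational events (only applies if Defender is the AV in use).
# If Attack Surface Reduction rules are involved, the block events show up here as well
# (IDs 1121/1122, from memory - double-check against the Defender docs).
Get-WinEvent -LogName "Microsoft-Windows-Windows Defender/Operational" -MaxEvents 200 |
    Where-Object { $_.TimeCreated -gt (Get-Date).AddDays(-7) } |
    Select-Object TimeCreated, Id, LevelDisplayName, Message |
    Format-List
```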

Further, Codex does not appear to be fully stuck; it seems to be running into issues like access violations, extremely long reads for small files, etc.

I ran a test where I had it read a file that contains only a small comment; it took four minutes to complete the task. I'll append the log below, maybe somebody can spot something?

`{“timestamp”:“2026-01-07T11:06:49.822Z”,“type”:“session_meta”,“payload”:{“id”:“…”,“timestamp”:“2026-01-07T11:06:44.805Z”,“cwd”:“c:\dev\codex-debug”,“originator”:“codex_vscode”,“cli_version”:“0.77.0”,“instructions”:“## Skills\nThese skills are discovered at startup from multiple local sources. Each entry includes a name, description, and file path so you can open the source for full instructions.\n- skill-creator: Guide for creating effective skills. This skill should be used when users want to create a new skill (or update an existing skill) that extends Codex’s capabilities with specialized knowledge, workflows, or tool integrations. (file: C:/Users/…/.codex/skills/.system/skill-creator/SKILL.md)\n- skill-installer: Install Codex skills into $CODEX_HOME/skills from a curated list or a GitHub repo path. Use when a user asks to list installable skills, install a curated skill, or install a skill from another repo (including private repos). (file: C:/Users/…/.codex/skills/.system/skill-installer/SKILL.md)\n- Discovery: Available skills are listed in project docs and may also appear in a runtime “## Skills” section (name + description + file path). These are the sources of truth; skill bodies live on disk at the listed paths.\n- Trigger rules: If the user names a skill (with $SkillName or plain text) OR the task clearly matches a skill’s description, you must use that skill for that turn. Multiple mentions mean use them all. Do not carry skills across turns unless re-mentioned.\n- Missing/blocked: If a named skill isn’t in the list or the path can’t be read, say so briefly and continue with the best fallback.\n- How to use a skill (progressive disclosure):\n  1) After deciding to use a skill, open its SKILL.md. Read only enough to follow the workflow.\n  2) If SKILL.md points to extra folders such as references/, load only the specific files needed for the request; don’t bulk-load everything.\n  3) If scripts/ exist, prefer running or patching them instead of retyping large code blocks.\n  4) If assets/ or templates exist, reuse them instead of recreating from scratch.\n- Description as trigger: The YAML description in SKILL.md is the primary trigger signal; rely on it to decide applicability. If unsure, ask a brief clarification before proceeding.\n- Coordination and sequencing:\n  - If multiple skills apply, choose the minimal set that covers the request and state the order you’ll use them.\n  - Announce which skill(s) you’re using and why (one short line). If you skip an obvious skill, say why.\n- Context hygiene:\n  - Keep context small: summarize long sections instead of pasting them; only load extra files when needed.\n  - Avoid deeply nested references; prefer one-hop files explicitly linked from SKILL.md.\n  - When variants exist (frameworks, providers, domains), pick only the relevant reference file(s) and note that choice.\n- Safety and fallback: If a skill can’t be applied cleanly (missing files, unclear instructions), state the issue, pick the next-best approach, and continue.”,“source”:“vscode”,“model_provider”:“openai”}}`
`{“timestamp”:“2026-01-07T11:06:49.823Z”,“type”:“response_item”,“payload”:{“type”:“message”,“role”:“user”,“content”:[{“type”:“input_text”,“text”:“# AGENTS.md instructions for c:\dev\codex-debug\n\n\n## Skills\nThese skills are discovered at startup from multiple local sources. Each entry includes a name, description, and file path so you can open the source for full instructions.\n- skill-creator: Guide for creating effective skills. This skill should be used when users want to create a new skill (or update an existing skill) that extends Codex’s capabilities with specialized knowledge, workflows, or tool integrations. (file: C:/Users/…/.codex/skills/.system/skill-creator/SKILL.md)\n- skill-installer: Install Codex skills into $CODEX_HOME/skills from a curated list or a GitHub repo path. Use when a user asks to list installable skills, install a curated skill, or install a skill from another repo (including private repos). (file: C:/Users/…/.codex/skills/.system/skill-installer/SKILL.md)\n- Discovery: Available skills are listed in project docs and may also appear in a runtime “## Skills” section (name + description + file path). These are the sources of truth; skill bodies live on disk at the listed paths.\n- Trigger rules: If the user names a skill (with $SkillName or plain text) OR the task clearly matches a skill’s description, you must use that skill for that turn. Multiple mentions mean use them all. Do not carry skills across turns unless re-mentioned.\n- Missing/blocked: If a named skill isn’t in the list or the path can’t be read, say so briefly and continue with the best fallback.\n- How to use a skill (progressive disclosure):\n  1) After deciding to use a skill, open its SKILL.md. Read only enough to follow the workflow.\n  2) If SKILL.md points to extra folders such as references/, load only the specific files needed for the request; don’t bulk-load everything.\n  3) If scripts/ exist, prefer running or patching them instead of retyping large code blocks.\n  4) If assets/ or templates exist, reuse them instead of recreating from scratch.\n- Description as trigger: The YAML description in SKILL.md is the primary trigger signal; rely on it to decide applicability. If unsure, ask a brief clarification before proceeding.\n- Coordination and sequencing:\n  - If multiple skills apply, choose the minimal set that covers the request and state the order you’ll use them.\n  - Announce which skill(s) you’re using and why (one short line). If you skip an obvious skill, say why.\n- Context hygiene:\n  - Keep context small: summarize long sections instead of pasting them; only load extra files when needed.\n  - Avoid deeply nested references; prefer one-hop files explicitly linked from SKILL.md.\n  - When variants exist (frameworks, providers, domains), pick only the relevant reference file(s) and note that choice.\n- Safety and fallback: If a skill can’t be applied cleanly (missing files, unclear instructions), state the issue, pick the next-best approach, and continue.\n”}]}}`
`{“timestamp”:“2026-01-07T11:06:49.823Z”,“type”:“response_item”,“payload”:{“type”:“message”,“role”:“user”,“content”:[{“type”:“input_text”,“text”:“<environment_context>\n  c:\dev\codex-debug\n  <approval_policy>on-request</approval_policy>\n  <sandbox_mode>read-only</sandbox_mode>\n  <network_access>restricted</network_access>\n  powershell\n</environment_context>”}]}}`
`{“timestamp”:“2026-01-07T11:06:49.970Z”,“type”:“response_item”,“payload”:{“type”:“message”,“role”:“user”,“content”:[{“type”:“input_text”,“text”:“<environment_context>\n  <sandbox_mode>workspace-write</sandbox_mode>\n  <network_access>restricted</network_access>\n  <writable_roots>\n    c:\dev\codex-debug\n  </writable_roots>\n  powershell\n</environment_context>”}]}}`
`{“timestamp”:“2026-01-07T11:06:49.971Z”,“type”:“response_item”,“payload”:{“type”:“message”,“role”:“user”,“content”:[{“type”:“input_text”,“text”:“# Context from my IDE setup:\n\n## Active file: debug.py\n\n## Open tabs:\n- debug.py: debug.py\n- debug_tools.ps1: debug_tools.ps1\n\n## My request for Codex:\nread C:\dev\codex-debug\debug.py and summarize its content\n”}]}}`
`{“timestamp”:“2026-01-07T11:06:49.971Z”,“type”:“event_msg”,“payload”:{“type”:“user_message”,“message”:“# Context from my IDE setup:\n\n## Active file: debug.py\n\n## Open tabs:\n- debug.py: debug.py\n- debug_tools.ps1: debug_tools.ps1\n\n## My request for Codex:\nread C:\dev\codex-debug\debug.py and summarize its content\n”,“images”:[]}}`
`{“timestamp”:“2026-01-07T11:06:49.971Z”,“type”:“turn_context”,“payload”:{“cwd”:“c:\dev\codex-debug”,“approval_policy”:“on-request”,“sandbox_policy”:{“type”:“workspace-write”,“writable_roots”:[“c:\dev\codex-debug”],“network_access”:false,“exclude_tmpdir_env_var”:false,“exclude_slash_tmp”:false},“model”:“gpt-5.2”,“effort”:“medium”,“summary”:“auto”}}`
`{“timestamp”:“2026-01-07T11:06:53.176Z”,“type”:“event_msg”,“payload”:{“type”:“token_count”,“info”:null,“rate_limits”:{“primary”:{“used_percent”:3.0,“window_minutes”:300,“resets_at”:1767791403},“secondary”:{“used_percent”:1.0,“window_minutes”:10080,“resets_at”:1768378203},“credits”:{“has_credits”:false,“unlimited”:false,“balance”:null},“plan_type”:null}}}`
`{“timestamp”:“2026-01-07T11:06:58.119Z”,“type”:“event_msg”,“payload”:{“type”:“agent_reasoning”,“text”:“**Preparing to read a file**\n\nI want to use a tool to read the file, and there’s no need for extra planning here. I’ll run a shell command using either Get-Content or cat, depending on what’s appropriate. I’ll make sure to use the functions.shell_command when executing this. That should get me the file contents in a straightforward way. Let’s do this and see what information we can pull from it!”}}`
`{“timestamp”:“2026-01-07T11:06:58.119Z”,“type”:“response_item”,“payload”:{“type”:“reasoning”,“summary”:[{“type”:“summary_text”,“text”:“**Preparing to read a file**\n\nI want to use a tool to read the file, and there’s no need for extra planning here. I’ll run a shell command using either Get-Content or cat, depending on what’s appropriate. I’ll make sure to use the functions.shell_command when executing this. That should get me the file contents in a straightforward way. Let’s do this and see what information we can pull from it!”}],“content”:null}}`
`{“timestamp”:“2026-01-07T11:06:58.239Z”,“type”:“response_item”,“payload”:{“type”:“function_call”,“name”:“shell_command”,“arguments”:“{“command”:“Get-Content -Path ‘C:\\dev\\codex-debug\\debug.py’”}”,“call_id”:“…”}}`
`{“timestamp”:“2026-01-07T11:06:58.241Z”,“type”:“event_msg”,“payload”:{“type”:“token_count”,“info”:{“total_token_usage”:{“input_tokens”:7262,“cached_input_tokens”:6272,“output_tokens”:70,“reasoning_output_tokens”:34,“total_tokens”:7332},“last_token_usage”:{“input_tokens”:7262,“cached_input_tokens”:6272,“output_tokens”:70,“reasoning_output_tokens”:34,“total_tokens”:7332},“model_context_window”:258400},“rate_limits”:{“primary”:{“used_percent”:3.0,“window_minutes”:300,“resets_at”:1767791403},“secondary”:{“used_percent”:1.0,“window_minutes”:10080,“resets_at”:1768378203},“credits”:{“has_credits”:false,“unlimited”:false,“balance”:null},“plan_type”:null}}}`
`{“timestamp”:“2026-01-07T11:11:05.984Z”,“type”:“response_item”,“payload”:{“type”:“function_call_output”,“call_id”:“…”,“output”:“Exit code: 0\nWall time: 0.3 seconds\nOutput:\n#hey codex - can you read this?\r\n”}}`
`{“timestamp”:“2026-01-07T11:11:05.984Z”,“type”:“turn_context”,“payload”:{“cwd”:“c:\dev\codex-debug”,“approval_policy”:“on-request”,“sandbox_policy”:{“type”:“workspace-write”,“writable_roots”:[“c:\dev\codex-debug”],“network_access”:false,“exclude_tmpdir_env_var”:false,“exclude_slash_tmp”:false},“model”:“gpt-5.2”,“effort”:“medium”,“summary”:“auto”}}`
`{“timestamp”:“2026-01-07T11:11:06.852Z”,“type”:“event_msg”,“payload”:{“type”:“token_count”,“info”:{“total_token_usage”:{“input_tokens”:7262,“cached_input_tokens”:6272,“output_tokens”:70,“reasoning_output_tokens”:34,“total_tokens”:7332},“last_token_usage”:{“input_tokens”:7262,“cached_input_tokens”:6272,“output_tokens”:70,“reasoning_output_tokens”:34,“total_tokens”:7332},“model_context_window”:258400},“rate_limits”:{“primary”:{“used_percent”:3.0,“window_minutes”:300,“resets_at”:1767791403},“secondary”:{“used_percent”:1.0,“window_minutes”:10080,“resets_at”:1768378203},“credits”:{“has_credits”:false,“unlimited”:false,“balance”:null},“plan_type”:null}}}`
`{“timestamp”:“2026-01-07T11:11:08.310Z”,“type”:“event_msg”,“payload”:{“type”:“agent_message”,“message”:“C:\\dev\\codex-debug\\debug.py contains a single comment line: #hey codex - can you read this?”}}`
`{“timestamp”:“2026-01-07T11:11:08.311Z”,“type”:“response_item”,“payload”:{“type”:“message”,“role”:“assistant”,“content”:[{“type”:“output_text”,“text”:“C:\\dev\\codex-debug\\debug.py contains a single comment line: #hey codex - can you read this?”}]}}`
`{“timestamp”:“2026-01-07T11:11:08.775Z”,“type”:“event_msg”,“payload”:{“type”:“token_count”,“info”:{“total_token_usage”:{“input_tokens”:14629,“cached_input_tokens”:13440,“output_tokens”:104,“reasoning_output_tokens”:34,“total_tokens”:14733},“last_token_usage”:{“input_tokens”:7367,“cached_input_tokens”:7168,“output_tokens”:34,“reasoning_output_tokens”:0,“total_tokens”:7401},“model_context_window”:258400},“rate_limits”:{“primary”:{“used_percent”:3.0,“window_minutes”:300,“resets_at”:1767791403},“secondary”:{“used_percent”:1.0,“window_minutes”:10080,“resets_at”:1768378203},“credits”:{“has_credits”:false,“unlimited”:false,“balance”:null},“plan_type”:null}}}`
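
For what it's worth, the part of the log that stands out to me is the gap: the shell_command call goes out at 11:06:58 and its output only arrives at 11:11:05, even though the command itself reports a wall time of 0.3 seconds. As a baseline, the same read can be timed directly in a terminal (same path as in the log); if that is near-instant, the missing four minutes are presumably spent around process spawning rather than in the read itself:

```powershell
# Baseline: time the same read the extension performed.
Measure-Command {
    Get-Content -Path 'C:\dev\codex-debug\debug.py'
} | Select-Object TotalMilliseconds

# If the antivirus slows down spawning a fresh PowerShell process (presumably
# closer to what the extension actually does), time that instead:
Measure-Command {
    powershell -NoProfile -Command "Get-Content -Path 'C:\dev\codex-debug\debug.py'"
} | Select-Object TotalMilliseconds
```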

I can confirm the extension does not work for me either. I tried it with two accounts and tested it on both VS Code and Cursor.
In both cases the extension does not connect to OpenAI; in fact, it does not even show usage percentages.

We have two very different issues then, because as described above, mine very much does connect to OpenAI, it does show the usage percentage, and it does work, just extremely slowly even for the easiest tasks.

Anybody with any ideas maybe?

Update: I deleted all settings and uninstalled VSC. A fresh installation did not fix the issue.

You are using one of the Codex models though, and not GPT 5.2? The normal version is very slow compared to the Codex ones (in VSCode with WSL, at least).

My main point here is that I have two laptops next to each other where one is significantly, and I mean truly significantly, slower than the other, no matter which model I am using. I have found out that WSL is having issues, so I am working on resolving that. Could this have been the cause? The VSC sandbox being super slow because WSL is not working properly?
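
In case anyone else wants to rule WSL in or out quickly, these are the basic status checks I'm using (assuming a reasonably recent wsl.exe; --status is not available on very old builds):

```powershell
# Overall WSL status: default distro, default WSL version, kernel version.
wsl --status

# Installed distros with their WSL version and running state.
wsl --list --verbose
```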

Are they set up identically enough to do a systematic comparison of things such as configuration, log files, etc.?

I liked the details noted earlier, but I am only just starting to use OpenAI Codex on Windows 11 with the VS Code extension, so I don't really know what to look at yet, much less comprehend the details.

I'm willing to spend a few hours on this in between other things today; think of me more as someone you can bounce ideas off than an expert, but I can try things out.

My setup:

  • Windows 11
  • ChatGPT Pro plan
  • VSC installed
  • VSC not running on WSL
  • AFAIK VSC is not calling or using WSL; I use Git Bash when Bash is needed
  • OpenAI Codex VSC extension installed
  • No proxy to access internet

I have no apparent problems with my setup, but then again some logs report problems that seem to have no effect on what I am doing.


How did you get the log noted earlier?

The first logs I check when using VSC with LLM extensions are in the VSC terminal panel: select “OUTPUT” and then select the appropriate log, in this case “Codex”, e.g.

[screenshot: the VSC Output panel with the “Codex” log selected]
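
By the way, the JSONL pasted further up looks more like a session rollout file than the Output-panel log. If that's what it is, I believe those land in a sessions subfolder under .codex in your user profile (assuming the default CODEX_HOME), so the newest one can be found with something like:

```powershell
# Find the most recently written session file under the default .codex location
# (adjust the path if CODEX_HOME points somewhere else).
Get-ChildItem "$env:USERPROFILE\.codex\sessions" -Recurse -Filter *.jsonl |
    Sort-Object LastWriteTime -Descending |
    Select-Object -Property FullName, LastWriteTime -First 1
```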

My IT guy took a look, and it appears that the antivirus is interfering somehow. Has anybody experienced a similar issue? With the antivirus turned off it runs smoothly and fast; with it on it takes forever, but it does not fail!
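
If it stays like this, I'll ask IT whether a scoped exclusion is an option instead of switching protection off entirely. If the product turns out to be Microsoft Defender (I still need to confirm which one we run), my understanding is that would look roughly like the sketch below; the process name is just an illustrative guess, and on a managed laptop this needs admin rights and IT sign-off:

```powershell
# Show exclusions that are already configured (requires the Defender module).
Get-MpPreference | Select-Object ExclusionPath, ExclusionProcess

# Illustrative exclusions only; run from an elevated prompt and clear with IT first.
Add-MpPreference -ExclusionPath "C:\dev\codex-debug"
Add-MpPreference -ExclusionProcess "codex.exe"   # hypothetical process name for the Codex backend
```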


OpenAI Codex has a GitHub repository with issues and discussions.

Many may miss the discussions because not all OpenAI GitHub repos have Discussions enabled.

HTH

You mean I should be opening an issue on GitHub?


Yes, I think they would appreciate it.

When I checked, I did not see any that were like yours, e.g. Windows plus antivirus.

The only similar one I found was about Codex not starting at all; your sessions are just slow.