Severe regression in GPT-5 Codex performance

I need to raise a critical issue with GPT-5 Codex.

Since the update, coding tasks that GPT-4.1 (and even 4o) handled smoothly are now 4–7 times slower with GPT-5 Codex.

This isn’t about “deeper reasoning” — it’s basic coding workflows that are now painfully delayed, breaking developer productivity.

Key problems:

  • Severe slowdown compared to GPT-4.1 (minutes instead of seconds).

  • No option to select the old models (4.1, 4o) that worked much better for fast coding.

  • Flow disruption: it’s impossible to keep a fast development pace when the model “thinks” this long.

  • Competitors (Claude Code, DeepSeek, etc.) are noticeably faster right now.

For many of us, this is not a minor inconvenience but a blocking issue. Please bring back the ability to choose GPT-4.1 / 4o, or urgently fix the latency in GPT-5 Codex.

24 Likes

Yes, I haven’t yet been able to check how much better it is vs. the former GPT-5 version. I’m sure it is better, but it is slower…
But… overall, Codex is just AWESOME, crazy efficient; a whole world is changing!

1 Like

I agree, severe slowdown!!! Please fix it.

Same problem here. GPT-5 Codex is very, very slow. Simple tasks that it used to complete in seconds now take up to 20 minutes. It’s truly unusable.

6 Likes

I tried to post this same message here, and the bot told me it was off-topic and wouldn’t let me publish it. Same feeling: before, I’d get commits done in 2–3 minutes; if something was heavy, it would take 5 minutes. Now it takes 20 minutes on average, and some have taken 30.
And, unfortunately, there’s no option to go back to the old model.
If this doesn’t work out, I’ll look for alternatives outside of ChatGPT.

3 Likes

Today we were supposed to present the AI options for our dev team to our management (a company in the pharmaceutical sector). Yesterday, OpenAI released an update that turned our demonstration scenario from steps lasting 2 minutes each into steps of 20 minutes.
I think management was convinced… but not in the right way.

4 Likes

This error appears many times. Too many times!!!

2 Likes

I also had a call to showcase the power of Codex. Expectations unmet. A real debacle.

I have this problem too. It works extremely slowly and throws errors.

1 Like

Just a heads up:
There will be an AMA with the Codex Team on Reddit tomorrow. Likely a good moment to voice your concerns.

1 Like

100% agree, the slowdown is bad enough to disrupt workflow. I’d normally run 3–4 tasks in parallel, and by the time I’d finished prompting the last one, the first one would usually be done. Today I’ve been staring at many (small) tasks running for over 20 minutes, and some longer tasks running for 60+ minutes get cut off with “Failed to sample tokens”. Current performance makes it very hard to do multiple feedback iterations, and I’m reverting to manual coding in a lot of cases.

Code quality is notably higher, but the trade-off is not worth it.

2 Likes

The slowness is bad, I agree with that, but the most frustrating part is the “Failed to sample tokens” error. If it’s slow but finishes with quality, that’s somewhat okay, but when it spends 40–50 minutes just to fail, oh man, that is bad. We are literally losing time instead of being productive.

1 Like

Today everything is working well.

Thank you for fixing it!

1 Like

Not in my case. A simple modification still takes 17 minutes, which is far too long.

1 Like

I found this post on Twitter with an explanation of the current situation:

I think that explains it, and we just need to wait.

My idea for a workaround right now would be to use the Codex CLI and set the model parameter, or revert to an earlier version; a rough sketch is below.
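For anyone trying that route, here is a minimal sketch. It assumes the Codex CLI’s `--model` flag and the `model` setting in `~/.codex/config.toml` behave as documented for your installed version; the model name and prompt are only illustrative, and which models you can actually select depends on your account and CLI version.

```sh
# Override the model for a single run (check `codex --help` for the exact
# flag name in your version; the model name here is illustrative):
codex --model gpt-5 "fix the failing unit tests"

# Or set a default so every session uses it, e.g. in ~/.codex/config.toml:
#   model = "gpt-5"
```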

Agreed. It is taking a long time. Which model are you running?

I don’t know whether the model has regressed or not, but I’ve noticed that it seems to be adding extra complexity. Previous models would just update the files that were already there; this model seems very keen on completely rewriting files in my project, even files it just created. Completely rebuilding classes also seems to be common.

I’m not a dev per se; this is more of a personal project to see what AI coding is capable of. But it seems like this newer model is overcomplicating things that the previous model did just fine, and faster.

Now I’ve once again hit my rate limit. I’ve been using Codex for a little over a week, and it’s only today and yesterday that I’ve started hitting my rate limit. The model is frequently having issues with simple indentation in Python, things that I can then go back and correct in a couple of seconds.

I don’t want to be negative, but this seems like a degradation to me.

I signed in today with high hopes after the severe quality degradation in CC. I’m new to Codex, so perhaps it’s my fault and I don’t know how to use it properly, but compared to any of the agents in VS Code it’s excruciatingly slow, to the point of being practically unusable. What’s even more concerning is that when it did some refactoring after CC and GPT-5 (via Copilot), it introduced several errors that it is now incapable of correcting. It’s not about the 23€ but about the hype and expectations vs. reality. I really wonder what the real story is here: whether it’s my lack of understanding or Codex not working as advertised.

It’s a good time for Sam to learn that any time he advertises a new product on his Twitter, it’s going to get hit with a surge of demand.

It’s really good to see, though, that people don’t rush off to try the new Grok tools :slight_smile:

Credit where credit is due: today (8 AM CET), Codex is unrecognizably better in both speed and the quality of its work. It’s like a completely different “personality”.