GPT-5-xxx (any version!) at least 10x slower than GPT-4-...?!

I recently switched from gpt-o4-mini to gpt-5-nano and later tested all the other GPT-5 models, only to find that every one of them is at least 10x slower! Honestly, anything GPT-5 just sucks completely!

I understand there may be some internal routing going on, but there should be some transparency about it. GPT-5 is communicated as the fast replacement for the 4 series, but that simply isn't true.

Am I the only one seeing this? Our customers were not excited about the lack of performance. In addition, latency fluctuates a lot, from bad to worse.
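For anyone who wants to quantify the slowdown rather than eyeball it, here's a minimal timing sketch. It's model-agnostic: you pass in any zero-argument callable (e.g. a lambda wrapping your actual API call, which is not shown here and would need your own client setup), and it reports min/median/max wall-clock latency over a few runs. This is just an illustration, not an official benchmarking method.

```python
import time
import statistics

def measure_latency(call, runs=5):
    """Time repeated invocations of `call` and return latency stats in seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        call()  # the workload under test, e.g. one chat completion request
        samples.append(time.perf_counter() - start)
    return {
        "min": min(samples),
        "median": statistics.median(samples),
        "max": max(samples),
    }

# Stand-in workload so the sketch runs as-is; swap in your real call, e.g.:
#   measure_latency(lambda: client.chat.completions.create(model=..., messages=...))
stats = measure_latency(lambda: time.sleep(0.01), runs=3)
print(stats)
```

Running the same harness once per model makes the "10x slower" claim easy to check on your own workload, and repeating it over a day would also surface the fluctuation I mentioned.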


Yes, same issue here. I have 15+ developers on gpt-5 (over Claude or Gemini), and now I regret it.

gpt-5 and gpt-5-codex are amazing coding models, but they are so slow compared to others (3x to 5x slower) that they are barely usable. gpt-5-nano is also crazy slow for a smaller model.

All my coding demos now use Sonnet 4.5 or even Gemini flash-latest!!!

Hope @OpenAI_Support is going to fix this. Such a missed opportunity.