Why is o3-mini on the API so slow?

Is anyone else having the same experience with it? I’m getting lots of errors (disconnections or something) and the process is taking around 5 minutes to run…

“Run has been ended with an invalid status “failed”. Sorry, something went wrong.”

“Run has been ended with an invalid status “expired”.”
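When a run ends in a terminal status like “failed” or “expired”, one common workaround is to poll until the run settles and retry it a bounded number of times. Here is a minimal sketch of that pattern; `start_run` and `fetch_status` are hypothetical stand-ins for whatever calls your app makes to create a run and retrieve its status (e.g. via the OpenAI SDK), not real API functions:

```python
import time

# Statuses that mean the run is over and did not succeed (assumed set).
TERMINAL_FAILURES = {"failed", "expired", "cancelled"}

def run_with_retries(start_run, fetch_status, max_retries=3, poll_interval=1.0):
    """Start a run, poll until it reaches a terminal status, and retry
    with a fresh run (up to max_retries) if it ends in a failure status.

    start_run()       -> returns a new run id (hypothetical placeholder)
    fetch_status(rid) -> returns the run's current status string
    """
    for attempt in range(1, max_retries + 1):
        run_id = start_run()
        while True:
            status = fetch_status(run_id)
            if status == "completed":
                return run_id
            if status in TERMINAL_FAILURES:
                break  # give up on this run and start a fresh one
            time.sleep(poll_interval)  # still queued/in_progress; keep polling
    raise RuntimeError(f"Run did not complete after {max_retries} attempts")
```

This won’t make the model faster, but it at least turns the “failed”/“expired” endings into automatic retries instead of hard errors.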

It has “reasoning” capabilities, as touted.
I do notice 7–10 seconds of latency when using o3-mini in my web app implementation.
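For what it’s worth, an easy way to confirm where those seconds go is to time the request itself with the standard library; a minimal sketch, where `call_model` would be whatever function in your app issues the o3-mini request (hypothetical name):

```python
import time

def timed_call(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds) using a monotonic clock."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed
```

Usage would look like `answer, latency = timed_call(call_model, prompt)`, which separates model latency from any overhead elsewhere in the web app.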

You can check out my Algebraic Equation GPT4.
Query it with a difficult math question and test its response.

Everything is slow today - even regular ChatGPT - which at the moment is so slow to respond that I am literally doing other tasks and coming back to it later. I’ve had responses take 5 minutes to generate today. This started last night and is continuing.

Maybe someone figured out how to get ChatGPT to mine Bitcoin? :smiley:

What are you querying that takes 5 minutes to respond? Is it a math question?

No. Debugging a ~150-line Python script.

It takes about 8 seconds on average to produce a solution via my web app, which calls o3-mini to solve equations.

I reckon that’s because it has reasoning and CoT. Compared to 3.5 Turbo, it takes much, much longer.

But 3.5 Turbo frequently gets the math wrong. So in return for the “thinking time”, we get correct math. That’s a fair trade-off.

Of course, if the latency can be reduced, that would be good.