GPT-4 API very slow: reaching timeout

Apart from this, has anyone found the quality of GPT-4 replies slightly different?

I mean, in terms of answer quality, its replies seem not as good as before, especially in terms of “consciousness”: sometimes it just doesn’t seem to grasp why the user is asking. Maybe that’s just my hallucination.

Honestly, I think a bunch of folks are finally beginning to realize the real cost of GPT-4, and quietly switching back to GPT-3.5-turbo.

I’ve been encountering the same issue since yesterday, with almost all of my GPT-4 workload reaching timeouts. Not resolved yet.

I’ve been using gpt-3.5-turbo for quite a while now, and I’ve never seen anything like this:

Operation timed out after 60013 milliseconds with 0 bytes received
Operation timed out after 60000 milliseconds with 0 bytes received
Operation timed out after 60003 milliseconds with 0 bytes received
Operation timed out after 60001 milliseconds with 0 bytes received
Operation timed out after 60006 milliseconds with 0 bytes received
Operation timed out after 60006 milliseconds with 0 bytes received
Operation timed out after 60006 milliseconds with 0 bytes received
Operation timed out after 60006 milliseconds with 0 bytes received
Operation timed out after 60006 milliseconds with 0 bytes received
Operation timed out after 60006 milliseconds with 0 bytes received

Failed after 10 retries.

I’ve seen a few timeouts before, but this is crazy. No response at all from the OpenAI team, which is even crazier.
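For reference, a retry loop producing logs like the above can be sketched roughly as follows (a minimal sketch; `send_request` is a hypothetical stand-in for the actual API call, not anything from the OpenAI SDK):

```python
import time

def call_with_retries(send_request, max_retries=10, timeout_s=60, pause_s=1.0):
    """Retry a request up to max_retries times, logging each timeout.

    send_request is a hypothetical placeholder for the real API call; it is
    expected to raise TimeoutError when no bytes arrive within timeout_s.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return send_request(timeout_s)
        except TimeoutError as exc:
            print(f"Attempt {attempt}: {exc}")
            time.sleep(pause_s)  # brief pause before the next attempt
    raise RuntimeError(f"Failed after {max_retries} retries.")
```

With a client-side limit of 60 seconds and a server that never starts responding, every attempt fails the same way, which matches the ten identical log lines above.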

Last week I was getting pretty consistent API timeout errors with GPT-3.5-turbo. Tonight, I’ve been trying to use the GPT-4 browsing beta and getting consistent network errors.

I guess we are experiencing the growing pains of a super successful company. I think the issue is pretty clear: We ALL want to use this amazing technology, and we are overloading the existing infrastructure. The question is: What will OpenAI do about it?

60-second timeouts? I envy you :slight_smile: — I increased my timeouts to 5 minutes, then to 10 minutes, because GPT-4 wouldn’t return anything in under 2.5 minutes anyway.
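The escalation described here (60 s, then 5 minutes, then 10 minutes) can be expressed as a simple timeout schedule. A sketch, with numbers mirroring this post rather than any official guidance:

```python
def timeout_schedule(initial_s=60, factor=5, cap_s=600):
    """Yield successive client timeouts: 60 s, then 300 s, then capped at 600 s."""
    t = initial_s
    while True:
        yield min(t, cap_s)
        t *= factor
```

Each retry would take the next value from the generator as its client-side timeout, so slow-but-successful responses eventually fit within the window.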

What’s worrying is that it seems there is prioritization going on: I can use ChatGPT Plus just fine and it responds quickly. Also, reading the replies in this forum, the response times that people see seem to vary by region, which suggests that some of us are being treated rather badly (that would be me).

As I write this, I’m trying to get a single request through for the 8th time or so. That request has a 10-minute timeout.

That’s not a functioning API.

Same problem. My app is hosted on Cloudflare Pages, and GPT-4 API responses are very slow. On long answers, I always get a network error. The console indicates these errors are due to HTTP/3 QUIC (ERR_QUIC_PROTOCOL_ERROR).

Getting the same issues, and I’m only using 3.5-turbo.


I have been experiencing this as well for the past 6 weeks or so.
Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
I am billed for all of these. They mostly have over 4K tokens including the response, and I have reached close to 8K. I can see the token counts in the billing area!
Sometimes (it used to happen at least once a day, and usually more often) my system runs 7–10 prompts of this size successfully and completes its task (my system sends multiple prompts like this, and I have an exponential-backoff retry mechanism).
This is incredibly frustrating. It’s an amazing tool, but it’s hard to make progress like this, and it’s very disconcerting that OpenAI has not answered any of my inquiries about this: zero response from the Help Center.
If they would just extend the timeout, it seems it would all work fine, since clearly (according to the billing) the requests themselves run to completion and consume the resources anyway. So why not let the paying customers get the results!
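An exponential-backoff retry mechanism like the one mentioned above is commonly implemented along these lines (a generic sketch with optional jitter; the delay values are illustrative, not taken from the post):

```python
import random

def backoff_delays(base_s=1.0, factor=2.0, max_delay_s=60.0, retries=5, jitter=True):
    """Compute exponential backoff delays: base, base*factor, ..., capped at max_delay_s.

    With jitter enabled, each delay is drawn uniformly from [0, delay]
    ("full jitter"), which spreads out retry storms across many clients.
    """
    delays = []
    for attempt in range(retries):
        delay = min(base_s * (factor ** attempt), max_delay_s)
        if jitter:
            delay = random.uniform(0, delay)
        delays.append(delay)
    return delays
```

A caller would sleep for each delay in turn between attempts, giving an overloaded backend progressively more breathing room instead of hammering it at a fixed interval.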

I am still regularly getting timeouts. Has anybody found a solution, or at least a cause?

Just happened to me on the gpt-4-0125-preview API:

<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>cloudflare</center>
</body>
</html>