Same problem here using ChatGPT. GPT-4 times out on every request.

Same for the past few days, and I've lost customers!

Same here on GPT-3, but only when requesting complex tasks.

Same problem here! We are losing customers.

Same, since the plugins release.

Also, I have never been able to generate 8k tokens in one go. Not even half of that.

Coding a cron job now that will retry until it succeeds, then stitch the partial results back together toward 8k…
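For anyone taking a similar approach, here's a minimal retry sketch. The wrapped `call_api` function and the back-off numbers are my own assumptions, not anything OpenAI documents:

```python
import time

def retry_until_success(call_api, max_attempts=10, base_delay=2.0):
    """Call the provided function until it returns without raising,
    backing off exponentially between attempts; re-raise the last
    error once max_attempts is exhausted."""
    for attempt in range(max_attempts):
        try:
            return call_api()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Here `call_api` would wrap the actual completion request; a cron job could run this repeatedly and append each successful chunk until the combined output reaches the length you need.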

Why release plugins and offer free access to ChatGPT when paying customers can't use the API?

Same problem. Requests only ever succeed when I limit output to ~2k tokens. I tried with streaming enabled and still had no success.

It's worse for me. I paid last month and went 10 days without getting a single response from either GPT-3.5 or GPT-4; then, the day before the next billing date, it started working again. Happy about that, I went and paid for another month, and the very next day it stopped again. This has now dragged on for 20 days (except for a single day when it worked, and only within the limits), with nothing but errors. I send message after message by email and in the chat itself, and the error never gets fixed. Unfortunately it hasn't been unlocked for me even within the limits, despite my being a PLUS subscriber. I'm even thinking of canceling.

GPT-4 is typically slow, but there are times when I notice it’s even slower than usual. Because of this, I generally use GPT-3.5-turbo.

Same here on 5/20. My customers are complaining about 504 errors for GPT-4. ChatGPT is still running quickly for us though.

Same here. I also switched because of cost. But, at least this week, I've noticed a LOT of timeouts with gpt-3.5-turbo:

The model is overloaded, retry or contact help center with request ID if error persists.


It's also frustrating that I seem to be charged for these errored requests. At least they are stored in the history, including completions that I never saw. I was doing a bunch of other, smaller test requests at the same time, but I'm pretty sure I've spent a lot of tokens on these errors.
Does anyone know for sure whether we are paying for them?
Also, at the same time, the Playground was working pretty well with gpt-4, as was the web version of ChatGPT-4.

Update, one minute later:

Come on OpenAI :( not again

openai.error.Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=600)

I think I've cracked it.
It did not error when I used streaming, and both the Playground and ChatGPT-4 use streaming.
My throughput was around 328 tokens per minute.

Reference on how to use streaming
github openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb
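As a sketch of what consuming such a stream looks like (chunk shape per the pre-1.0 `openai` Python SDK that produced the traceback above; the helper name is my own):

```python
def collect_stream(chunks):
    """Join the incremental "delta" payloads from a streamed chat
    completion into the full reply text."""
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        # role-only and final chunks carry no "content" key
        parts.append(delta.get("content", ""))
    return "".join(parts)

# Against the real API it would be used roughly like this (not run here):
# import openai
# stream = openai.ChatCompletion.create(
#     model="gpt-4",
#     messages=[{"role": "user", "content": "..."}],
#     stream=True,
# )
# print(collect_stream(stream))
```

Because chunks arrive as they are generated, the connection never sits idle for minutes waiting on one big response body, which is presumably why the read timeout stops firing.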

I can barely get a single output with gpt-3.5-turbo because it’s constantly timing out…

Apart from this, has anyone found the quality of GPT-4's replies slightly different?

I mean, its answers seem not as good as before… especially in terms of "consciousness": sometimes it just isn't very clear on what the user is asking. Maybe it's just my hallucination.

Honestly, I think a bunch of folks are finally beginning to realize the real cost of GPT-4, and quietly switching back to GPT-3.5-turbo.

I've been encountering the same issue since yesterday, with almost all of our GPT-4 workload hitting timeouts. Not resolved yet.

I've been using gpt-3.5-turbo for quite a while now, and I've never seen anything like this:

Operation timed out after 60013 milliseconds with 0 bytes received
Operation timed out after 60000 milliseconds with 0 bytes received
Operation timed out after 60003 milliseconds with 0 bytes received
Operation timed out after 60001 milliseconds with 0 bytes received
Operation timed out after 60006 milliseconds with 0 bytes received
Operation timed out after 60006 milliseconds with 0 bytes received
Operation timed out after 60006 milliseconds with 0 bytes received
Operation timed out after 60006 milliseconds with 0 bytes received
Operation timed out after 60006 milliseconds with 0 bytes received
Operation timed out after 60006 milliseconds with 0 bytes received

Failed after 10 retries.

I've seen a few timeouts before, but this is crazy. No response at all from the OpenAI team, which is even crazier.

Last week I was getting pretty consistent API timeout errors with GPT-3.5-turbo. Tonight, I’ve been trying to use the GPT4 browsing beta and getting consistent network errors.

I guess we are experiencing the growing pains of a super successful company. I think the issue is pretty clear: We ALL want to use this amazing technology, and we are overloading the existing infrastructure. The question is: What will OpenAI do about it?

60-second timeouts? I envy you :slight_smile: — I increased my timeouts to 5 minutes, then to 10 minutes, because GPT-4 wouldn't return anything in under 2.5 minutes.

What’s worrying is that it seems there is prioritization going on: I can use ChatGPT Plus just fine and it responds quickly. Also, reading the replies in this forum, the response times that people see seem to vary by region, which suggests that some of us are being treated rather badly (that would be me).

As I write this, I’m trying to get a single request through for the 8th time or so. That request has a 10-minute timeout.

That’s not a functioning API.

Same problem. My app is hosted on Cloudflare Pages, and GPT-4 API responses take a very long time. On long answers I always hit a network error; the console indicates these errors are due to HTTP/3 QUIC (ERR_QUIC_PROTOCOL_ERROR).