Same here… guys, we’re paying for this service. This is frustrating.
I’m experiencing the same thing. It’s been hours and I still can’t get even a simple query through to GPT-4.
Same here, GPT-4 has not been working for a few hours.
Same problem here using ChatGPT: GPT-4 times out on every request.
Same here for the past few days, and I’ve lost customers!
Same here on GPT-3, but only when requesting complex tasks.
Same problem here! We are losing customers.
Same, since the plugins release.
Also, I have never been able to generate 8k tokens in one go. Not even half of that.
I’m coding a cron job now that will retry until it succeeds, then combine the results up to 8k (rough sketch below).
Why release plugins and give free access to ChatGPT when paying customers can’t use the API?
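Rough sketch of the idea, nothing more: it assumes the pre-1.0 openai Python package, and the model name, prompts, and the four-part split are placeholders, not my production code.

```python
import time
import openai  # pre-1.0 openai Python package assumed; reads OPENAI_API_KEY from the environment

def ask_with_retry(prompt, model="gpt-3.5-turbo", max_tokens=2000, retries=5):
    """Retry a single completion until it succeeds or we give up."""
    for attempt in range(retries):
        try:
            resp = openai.ChatCompletion.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
                max_tokens=max_tokens,
            )
            return resp["choices"][0]["message"]["content"]
        except (openai.error.Timeout,
                openai.error.APIError,
                openai.error.ServiceUnavailableError):
            time.sleep(2 ** attempt)  # back off a bit before the next attempt
    raise RuntimeError(f"Gave up after {retries} attempts")

# Placeholder example: ask for the output in smaller pieces and combine them,
# instead of requesting all ~8k tokens in one call.
parts = [ask_with_retry(f"Write part {i} of 4 of the report.") for i in range(1, 5)]
combined = "\n\n".join(parts)
```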
Same problem. I only ever succeed when I limit the output to ~2k tokens. I tried with streaming enabled and still had no success.
Worse for me: I paid last month and went 10 days without getting a single response from either GPT-3.5 or GPT-4. On the eve of the next billing it started working again, so, happy with that, I went ahead and paid for another month. The very next day, nothing again, and it has dragged on for 20 days now (except for a single day when it worked, and only within the limits), just errors. I send message after message by email and in the chat itself, and the error never gets fixed. Unfortunately it hasn’t even been available within the limits for me, even though I’m a PLUS subscriber. I’m even thinking about cancelling.
GPT-4 is typically slow, but there are times when I notice it’s even slower than usual. Because of this, I generally use GPT-3.5-turbo.
Same here on 5/20. My customers are complaining about 504 errors for GPT-4. ChatGPT is still running quickly for us though.
Same here. I also switched because of cost. But, at least this week, I noticed a LOT of timeouts with gpt-3.5-turbo: “The model is overloaded, retry or contact help center with request ID if error persists.”
It’s also frustrating that I seem to be charged for these errored requests. At least they are stored in the history, including completions that I never saw. I was doing a bunch of other, smaller test requests, but I’m pretty sure I’ve spent a lot of tokens on these errors.
Does anyone know for sure whether we are paying for them?
Also, at the same time, the Playground was working pretty well with gpt-4, as was the web version of ChatGPT with GPT-4.
1 minute later update:
Come on OpenAI : ( not again
openai.error.Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=600)
I think I’ve cracked it.
It did not error when I used streaming, and both the Playground and ChatGPT with GPT-4 use streaming.
My speed was around 328 tokens per minute.
Reference on how to use streaming: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb
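If it helps anyone, here is a minimal streaming sketch along the lines of that notebook, assuming the pre-1.0 openai Python package and an OPENAI_API_KEY environment variable; the prompt is just a placeholder.

```python
import openai  # pre-1.0 openai Python package; reads OPENAI_API_KEY from the environment

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Explain streaming in one paragraph."}],
    stream=True,  # tokens arrive chunk by chunk instead of one big response
)

collected = []
for chunk in response:
    # Each streamed chunk carries a small "delta"; "content" is absent on some chunks.
    delta = chunk["choices"][0]["delta"]
    piece = delta.get("content", "")
    collected.append(piece)
    print(piece, end="", flush=True)

full_text = "".join(collected)
```

Since data keeps arriving over the same connection, the client-side read timeout is much less likely to trip while the model is still generating.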
I can barely get a single output with gpt-3.5-turbo because it’s constantly timing out…
Apart from this, has anyone found that the quality of GPT-4’s replies is slightly different?
I mean, its answers don’t seem as good as before, especially in terms of “awareness”: sometimes it just isn’t very clear about what the user is asking. Maybe I’m the one hallucinating.
Honestly, I think a bunch of folks are finally beginning to realize the real cost of GPT-4, and quietly switching back to GPT-3.5-turbo.
I’ve been hitting the same issue since yesterday; almost all of our GPT-4 requests end in timeouts. It’s not resolved yet.
I’ve been using gpt-3.5-turbo for quite a while now, and I’ve never seen anything like this:
Operation timed out after 60013 milliseconds with 0 bytes received
Operation timed out after 60000 milliseconds with 0 bytes received
Operation timed out after 60003 milliseconds with 0 bytes received
Operation timed out after 60001 milliseconds with 0 bytes received
Operation timed out after 60006 milliseconds with 0 bytes received
Operation timed out after 60006 milliseconds with 0 bytes received
Operation timed out after 60006 milliseconds with 0 bytes received
Operation timed out after 60006 milliseconds with 0 bytes received
Operation timed out after 60006 milliseconds with 0 bytes received
Operation timed out after 60006 milliseconds with 0 bytes received
Failed after 10 retries.
I’ve seen a few timeouts before, but this is crazy. And no response at all from the OpenAI team, which is even crazier.
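For anyone comparing notes, a bare-bones retry loop that matches the shape of that log looks roughly like this. Sketch only: it assumes the pre-1.0 openai Python package, and request_timeout is that library’s client-side timeout knob, not anything the API guarantees.

```python
import openai  # pre-1.0 openai Python package assumed

MAX_RETRIES = 10

def chat_with_timeout(messages, model="gpt-3.5-turbo"):
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            resp = openai.ChatCompletion.create(
                model=model,
                messages=messages,
                request_timeout=60,  # client-side cap, roughly the 60 s seen in the log above
            )
            return resp["choices"][0]["message"]["content"]
        except openai.error.Timeout as err:
            print(f"Attempt {attempt}/{MAX_RETRIES} timed out: {err}")
    raise RuntimeError(f"Failed after {MAX_RETRIES} retries.")
```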