leee
47
It's happening again… The server had an error while processing your request. Sorry about that!
I'm adding retries, but that will only increase the server load and make things worse…
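If retries are unavoidable, backing off exponentially with jitter at least avoids hammering an already-struggling server in lockstep. A minimal sketch (the `call_api` argument is a placeholder for whatever request function you use, not a real library call):

```python
import random
import time


def with_backoff(call_api, max_attempts=5, base_delay=1.0, max_delay=60.0):
    """Retry call_api with capped exponential backoff plus jitter.

    Jittered delays spread a fleet of clients out over time instead of
    having them all retry at the same fixed interval.
    """
    for attempt in range(max_attempts):
        try:
            return call_api()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            # 2**attempt growth, capped at max_delay, with jitter in [0, delay)
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(random.uniform(0, delay))
```

The jitter matters as much as the growth: five retries at a fixed 1-second interval from thousands of clients is exactly the thundering herd the post above is worried about.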
Same here. My server exposes an API endpoint using gpt-3.5-turbo; sometimes it works and returns a valid response, but most of the time today it returns
The server had an error while processing your request. Sorry about that!
Same here, using gpt-3.5-turbo: random "The server had an error while processing your request. Sorry about that!" errors
lraus
50
It's not great, especially as it's a paid product, but we need to remember that it's in beta and doesn't have an SLA.
Having the same issue here, on average it is taking 6 repeat requests before I can actually get a response…
Beyond this, for larger requests I am seeing 10+ minute response times (for a 3,000-token request/response). Is anyone else seeing a dramatic slowdown?
Have an app that was working fine up until a couple days ago. Less than 10% of my requests are going through and when they do the responses are extremely slow. The errors I am getting are 502 errors and this one:
Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 60.0 seconds as it raised Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=120.0).
Yes, the time needed to build an article with my gpt-3.5-turbo setup was around 70-80 seconds; the last few days it has been noticeably slower, anywhere between 150-250 seconds.
jwr
55
I am seeing response times of 3-5 minutes, with many requests timing out after 10 minutes (my timeout). Is that expected? If so, I need to rethink the entire approach.
I would double your timeout during this outage/throttling period. My 3-4 minute requests are returning successfully but often take over 10 minutes at the moment.
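The "double your timeout" advice above can be automated: retry with a growing per-attempt timeout instead of a fixed one. A sketch under the assumption that your request function accepts a `timeout` keyword (`request_fn` here is hypothetical; with the openai 0.27.x Python client you would forward this value as the call's timeout parameter):

```python
def call_with_growing_timeout(request_fn, initial_timeout=120.0,
                              factor=2.0, max_attempts=3):
    """Call request_fn(timeout=...), growing the timeout on each failure.

    With the defaults this tries 120 s, then 240 s, then 480 s before
    giving up, matching the "double it during the outage" suggestion.
    """
    timeout = initial_timeout
    last_err = None
    for _ in range(max_attempts):
        try:
            return request_fn(timeout=timeout)
        except Exception as err:  # in real code, catch the client's Timeout type
            last_err = err
            timeout *= factor
    raise last_err
```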
gpt-3.5-turbo very likely isn't working; text-davinci-003 can still do the job.
reneh
58
Since yesterday, all my GPT-4 API requests time out after 60 seconds. Is anyone else experiencing this? Until yesterday I could call the API with max tokens 2000 and it could take several minutes without a problem.
Well, I have two apps: one using Python, hosted in the UK, and another on Vercel.
The Vercel one works just fine, but the Python one keeps hitting timeouts and server errors after a few requests. I have updated the openai Python package to the available version, 0.27.7.
We are using text-davinci-003 (pay-as-you-go) with a 1-minute delay between requests, and it still throws error 429: { "error": { "message": "The server had an error while processing your request. Sorry about that!", "type": "server_error", "param": null, "code": null } }
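Note that although the status code is 429, the body's `type` field says `server_error`, not a rate-limit error, so slowing down for quota reasons would be the wrong response. A sketch of inspecting the body before deciding how to react (the JSON in the test is the payload from the post above; `classify_error` is an illustrative helper, not part of any client library):

```python
import json


def classify_error(status_code, body):
    """Return 'rate_limit', 'server_error', or 'other' for an API error.

    The error body's "type" field is more trustworthy than the bare
    status code: a 429 carrying type == "server_error" is a backend
    failure, not a quota problem.
    """
    try:
        err = json.loads(body).get("error", {})
    except (ValueError, AttributeError):
        return "other"
    if err.get("type") == "server_error":
        return "server_error"
    if status_code == 429:
        return "rate_limit"
    return "other"
```

Only the genuine rate-limit case warrants slowing your request rate; a server error is better handled by retrying with backoff.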
I have the same problem. The API is charging for the usage, and the output prints the message "The server had an error while processing your request. Sorry about that!"
If I remember correctly, 429 means you are over your current rate limit.
This was last month, before I canceled paid services. It had been happening for months without any help, and they kept increasing my quota. The rate limit was nowhere close to being met either; it crashed every billing cycle when I made the payment for the previous month.
ChatGPT Plus is a separate system from the OpenAI Platform used for API-based accounts; you should set up a payment method there.
Hah! My last e-mail was sent months ago, and still no reply. Very bad customer support.
Thanks, I know. I have both, and I have been a paying customer for a while. Payments are set up correctly: I have automatic payment set up and went over the $18 mark in previous months, but the API in my developer account stops working whenever the automatic payment is made. No one was helping me with this, so I tried switching to my husband's name; still no response.