Having the same issue here; on average it is taking 6 repeat requests before I can actually get a response…

Beyond this, for larger requests I am seeing 10+ minute response times (for a 3,000-token request/response). Is anyone else seeing a dramatic slowdown?

I have an app that was working fine up until a couple of days ago. Less than 10% of my requests are going through, and when they do, the responses are extremely slow. The errors I am getting are 502 errors and this one:

Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 60.0 seconds as it raised Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=120.0).

Yes, the time needed to build an article with my gpt-3.5-turbo setup was around 70-80 seconds; the last few days are obviously slower, anywhere between 150-250 seconds.

I am seeing response times of 3-5 minutes, with many requests timing out after 10 minutes (my timeout). Is that expected? If so, I need to rethink the entire approach.

I would double your timeout during this outage/throttling period. My 3-4 minute requests are returning successfully but often take over 10 minutes at the moment.
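A minimal sketch of the advice above: raise the client timeout well past the default and retry failed calls with a growing delay. The helper name and retry counts are my own; the commented usage assumes the openai 0.27.x Python client, whose `request_timeout` is in seconds.

```python
import time

def call_with_retries(fn, retries=5, delay=2.0):
    """Call fn(); on failure, sleep `delay` seconds and retry,
    doubling the delay each time (simple exponential backoff)."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: surface the last error
            time.sleep(delay)
            delay *= 2

# Hypothetical usage with the openai 0.27.x client; request_timeout=600
# accommodates the 10-minute responses people are reporting:
# response = call_with_retries(lambda: openai.ChatCompletion.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": "Hello"}],
#     request_timeout=600,
# ))
```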

gpt-3.5-turbo very likely will not work right now; using text-davinci-003 can still do the job.

Since yesterday, all my GPT-4 API requests time out after 60 seconds. Is anyone else experiencing this? Until yesterday I could call the API with max tokens 2000 and it could take several minutes without a problem.

Well, I have two apps: one using Python, hosted in the UK, and another just using Vercel.
The Vercel one works just fine, but the Python one keeps timing out and returning server errors after a few requests. I have updated the Python library to the latest available version, 0.27.7.

We are using text-davinci-003 (pay-as-you-go) with a 1-minute delay between requests, and it still throws a 429 error: { "error": { "message": "The server had an error while processing your request. Sorry about that!", "type": "server_error", "param": null, "code": null } }

I have the same problem. The API is charging for the usage, and the output prints the message "The server had an error while processing your request. Sorry about that."

If I remember correctly, a 429 means you are over your current rate limit.
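Worth noting, though, that the 429 quoted earlier carried a `"type": "server_error"` body, so it is not a normal rate-limit response. A small sketch (function name and return labels are my own) that inspects the error body before deciding whether to slow down or simply retry:

```python
import json

def classify_openai_error(status_code, body):
    """Classify an OpenAI API error response.

    Returns 'server_error' when the body says the server itself failed
    (just retry later), 'rate_limit' for a genuine 429, else 'other'."""
    try:
        err = json.loads(body).get("error", {})
    except json.JSONDecodeError:
        err = {}
    if err.get("type") == "server_error":
        return "server_error"
    if status_code == 429:
        return "rate_limit"
    return "other"
```

So a 429 whose body says `server_error` should be retried like a 5xx, not treated as proof you exceeded your quota.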

This was last month, before I canceled paid services. It had been happening for months without any help, and they kept increasing my quota. My rate limit was nowhere close to being met either; it crashed every billing cycle when I made the payment for the previous month.

ChatGPT Plus is a separate system from the OpenAI Platform used for API-based accounts; you should set up a payment method there.

Hah! My last e-mail was sent months ago, still no reply. Very bad customer support.

Thanks, I know. I have both, and I have been a paid customer for a while. Payments are set up correctly: I have automatic payment set up and went over the $18 mark in previous months, but the API in my developer account stops working whenever the automatic payment is made. No one was helping me with this, so I tried switching to my husband's name; still no response.

Well, the $18 is a one-time grant; once you have used it, that is it, and you then need to pay for the tokens used. If you have a payment method in place, it will not stop unless you set a manual hard limit.

I wish they had live chat. Right now, as soon as you respond, you have to start the request to talk to someone all over again.

I had a hard limit set. I no longer had the free credit because I was a paid customer, but usage was automatically stopping when it reached $18, so there is some sort of bug with my account. They just kept increasing my hard limit instead of fixing the other issue. I even changed my API key after I switched over to a paid account. I was a paid customer, paying (mind you) over $18 for months, but now that I have my app published, I stopped my paid account in June and switched over to a new account under my husband's name. I don't want to run into the same issue, so I would really like some reassurance that they will increase my hard limit and that I will only reach my quota once the hard limit is reached.

I have gotten the exact same error. I found there was an issue in my API request body (the function-calling spec specified an array for a certain property, but its default value was set as an empty string). It should have been a 400 error, but it was a 500 for some reason.
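For anyone hitting the same thing, a quick sketch of the mismatch: in a JSON-schema parameter spec, a property with `"type": "array"` should default to a list, not `""`. The checker function and the `tags` property below are my own invented examples, not the actual spec from the post above.

```python
def check_array_defaults(schema):
    """Return names of properties declared as JSON-schema arrays whose
    `default` is not a list (e.g. an empty string instead of [])."""
    return [
        name
        for name, spec in schema.get("properties", {}).items()
        if spec.get("type") == "array"
        and "default" in spec
        and not isinstance(spec["default"], list)
    ]

# Hypothetical function-calling parameter spec reproducing the bug:
buggy = {
    "type": "object",
    "properties": {
        "tags": {"type": "array", "items": {"type": "string"}, "default": ""},
    },
}
# Same spec with the default corrected to an empty list:
fixed = {
    "type": "object",
    "properties": {
        "tags": {"type": "array", "items": {"type": "string"}, "default": []},
    },
}
```

Running a check like this on the request body before sending it can catch the mismatch locally instead of relying on the API's (apparently misleading) 500.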
