Besides the speed issue when requests do complete with status 200, we also get a lot of these:
Error: {
  "error": {
    "message": "The server had an error while processing your request. Sorry about that!",
    "type": "server_error",
    "param": null,
    "code": null
  }
}
So far, the OpenAI team has not responded… we are all experiencing the same thing and waiting on a fix. This server_error ("The server had an error while processing your request. Sorry about that!") makes it impossible to use davinci-003 at the moment.
Yes, things are going slow as well, but the main issue is the server_error, which is completely breaking every few requests. I'm assuming both are caused by the same issue.
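Since the server_error seems transient (only every few requests fail), a client-side retry is the usual workaround until OpenAI fixes it. Here's a minimal sketch of retry-with-exponential-backoff; `call_api` is a placeholder for whatever function wraps your completion request, not part of the real OpenAI client:

```python
import random
import time

def retry_with_backoff(call_api, max_attempts=5, base_delay=1.0):
    """Call `call_api` and retry on any exception, sleeping
    base_delay * 2**attempt (plus a little jitter) between attempts.
    Re-raises the last error once max_attempts is exhausted."""
    for attempt in range(max_attempts):
        try:
            return call_api()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)
```

You would wrap the failing completion call in a lambda and pass it in; in practice you may want to retry only on 5xx/server_error responses rather than every exception.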
I have the same issue since this morning. It started a little on Thursday, so I added a delay of 4 sec between each call, but since this morning I always get this error: 500 {'error': {'message': 'Internal server error', 'type': 'auth_subrequest_error', 'param': None, 'code': 'internal_error'}}
API Requests, to Davinci Text 3, from the UK: On 5th February 2023, the API was returning responses within seconds. Today, responses are closer to 1 minute. This is a disaster for a planned demo that will now need to be rescheduled. So, the issue is not resolved for me.
Just got some testers for my MVP, and the API slowed down to the point where it times out constantly. What unlucky timing. Until yesterday it was working just fine through extensive testing.
It's horrible. I'm getting random 429 errors, and it takes a very long time to generate even the simplest answers (1 minute seems like the new normal). I'm on a paid account. I've had these issues for over a week now; before, it was much faster and I rarely got these weird 429 errors.
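For the 429s specifically, the standard approach is to back off before retrying, honoring a Retry-After header when the server sends one. A small sketch of how the wait could be computed (the header name and cap are assumptions, not something OpenAI documents for this outage):

```python
def backoff_delay(attempt, retry_after=None, base=1.0, cap=60.0):
    """Seconds to wait before retrying a rate-limited (429) request.
    If the response carried a Retry-After value, honor it (capped);
    otherwise fall back to exponential backoff: base * 2**attempt,
    also capped at `cap` seconds."""
    if retry_after is not None:
        return min(float(retry_after), cap)
    return min(base * (2 ** attempt), cap)
```

Combined with the retry loop above, this keeps a burst of requests from hammering an already overloaded endpoint.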
I use AWS east coast servers. When I was testing from the cmd line 3-4 weeks back, I used to see frequent 429s. But now I rarely do. My app users are in Asia & USA. So, we cover the entire 24h span.
Also, wrt latencies, I routinely see 3-10 secs, which I thought was terrible :)
I can confirm it's the same here in Germany. It got a bit better today, and the response time often gets back to normal in the evening (Central European Time), but it is still much worse than it used to be. (davinci-3, completion API)
It's not a matter of code. The speed has improved drastically now, but the quality is down: we are seeing a lot more "junk" or hallucinations returned for the same prompts. I guess they are balancing quality of output against performance.