API calls to text-davinci-003 very slow, with random speeds for identical prompts

Hello,
We are experiencing very high latency for identical prompts sent to text-davinci-003; our servers are located in France.
Response times range from 10 seconds to 1 minute per request, tested with identical settings and API options. I just don’t understand what we should do going forward to make this sustainable for production in terms of speed.
The prompts have been optimized and only take 3-5 s when entered with identical options in the Playground interface.
We have very high bandwidth on these servers and already use other APIs whose endpoints are in the US, without such delays.
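For anyone trying to quantify the gap against the Playground, a minimal timing wrapper makes it easy to log per-request latency (plain Python, no OpenAI-specific code assumed; pass whatever client call you use as `fn`):

```python
import time

def timed(fn, *args, **kwargs):
    """Run a single call and return (result, elapsed_seconds).

    Useful for logging per-request API latency so it can be compared
    directly against the 3-5 s observed in the Playground.
    """
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed
```

Logging these numbers per request (rather than eyeballing them) also makes a support report much more concrete.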


Besides the speed issue when requests do complete with status 200, we also get a lot of these:
Error: {
  "error": {
    "message": "The server had an error while processing your request. Sorry about that!",
    "type": "server_error",
    "param": null,
    "code": null
  }
}
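Until there is an official fix, the usual workaround for transient server_error responses is to retry with exponential backoff. Here is a minimal sketch (the helper name, retry counts, and delays are my own choices, not anything OpenAI documents):

```python
import random
import time

def retry_with_backoff(fn, max_retries=5, base_delay=1.0, retry_on=(Exception,)):
    """Call fn(), retrying on the given exception types.

    Between attempts, sleeps base_delay * 2**attempt plus a little
    random jitter; re-raises the last error once retries are exhausted.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except retry_on:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

With the official Python client you would pass your completion call wrapped in a lambda as `fn` and list the client's error classes in `retry_on`, so that only transient server errors trigger a retry.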

This is discussed in this thread:

So far, the OpenAI team has not responded… we are all experiencing the same thing and waiting on a fix. The “The server had an error while processing your request. Sorry about that!” server_error makes it impossible to use text-davinci-003 at the moment.

Are you also having speed issues when requests are successful?


Yes, things are going slowly as well, but the main issue is the server_error, which completely breaks every few requests. I’m assuming both are caused by the same issue.

Same problem: 30-55 seconds per request, or an error.

Any answers about this? Getting up to a minute per request via the API. This is for around 500 to 1000 tokens total.

They say it’s resolved, but it’s clearly getting worse and worse.


I have had the same issue since this morning. It started a little on Thursday, so I added a delay of 4 seconds between each call, but since this morning I always get this error: 500 {'error': {'message': 'Internal server error', 'type': 'auth_subrequest_error', 'param': None, 'code': 'internal_error'}}

What can I do to fix this?
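One thing that may help while waiting for a fix is enforcing the delay systematically instead of with ad-hoc sleeps. A minimal client-side throttle sketch (the class name and the 4-second default are just assumptions based on the post above; note a 500 server_error is not a rate-limit error, so this mainly helps avoid compounding the problem with 429s):

```python
import time

class Throttle:
    """Enforce a minimum interval between successive API calls."""

    def __init__(self, min_interval=4.0):
        self.min_interval = min_interval
        self._last = float("-inf")  # no call has happened yet

    def wait(self):
        # Sleep just long enough that consecutive calls are at least
        # min_interval seconds apart, then record this call's time.
        remaining = self.min_interval - (time.monotonic() - self._last)
        if remaining > 0:
            time.sleep(remaining)
        self._last = time.monotonic()
```

Calling `throttle.wait()` right before each API request then spaces all requests evenly, regardless of where in the code they originate.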


OpenAI have just posted that it has been investigated and resolved in the Playground and the API.

API requests to text-davinci-003 from the UK: on 5 February 2023, the API was returning responses within seconds. Today, responses take closer to 1 minute. This is a disaster for a planned demo that will now need to be rescheduled. So the issue is not resolved for me.

Same. It is not good for showing off prototypes. Everyone will be looking at their watches…

Just got some testers for my MVP, and the API has slowed down to the point where it constantly times out. What unlucky timing. Until yesterday it was working just fine through extensive testing.

It’s horrible. I’m getting random 429 errors, and it takes a very long time to generate even the simplest answers (1 minute seems like the new normal). I’m on a paid account. I have had these issues for over a week now; before that it was much faster and I rarely got these weird 429 errors.

I use AWS east coast servers. When I was testing from the cmd line 3-4 weeks back, I used to see frequent 429s. But now I rarely do. My app users are in Asia & USA. So, we cover the entire 24h span.

Also, with regard to latencies, I routinely see 3-10 seconds, which I thought was terrible :)

I can confirm it is the same here in Germany. It got a bit better today, and response times often get back to normal in the evening (Central European Time), but they are much worse than they used to be. (text-davinci-003, completions API)

The slow API is still an issue when trying from California.

Can you show your code, please? We might be able to help.

It’s not a matter of code.
The speed has improved drastically now, but the quality is down: we are seeing a lot more “junk” or hallucinations returned for the same prompts. I guess they are balancing output quality against performance.

For me, the latency has gotten worse. It takes about 13 seconds to get a response from davinci. What is the issue?