GPT-4 API calls are very slow, or broken

Anyone else seeing this? I am only processing 2000 tokens, and neither gpt-4 nor gpt-3.5-turbo is working. text-davinci-003 is, though…

A recent change mentioned on the Discord channel is to drop/refuse connections when the servers are overloaded, instead of connecting and then failing. (The change may or may not have been deployed yet.)

In the past, the system would connect and then fail to finish the completion. Essentially, it would time out. Often it would return the dreaded 429 error even when it was not a user-based rate limit.
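For what it's worth, the usual workaround for that older behavior was a retry loop with exponential backoff. This is just a rough sketch, assuming the legacy openai Python library (0.x, current when this thread was written) with openai.api_key already set; the function name chat_with_backoff and the retry counts are illustrative, not anything official:

```python
import time

import openai  # legacy 0.x client; assumes openai.api_key is already set


def chat_with_backoff(messages, model="gpt-3.5-turbo", max_retries=5):
    """Retry a chat completion on timeouts and 429s with exponential backoff."""
    delay = 1.0
    for attempt in range(max_retries):
        try:
            return openai.ChatCompletion.create(model=model, messages=messages)
        except (openai.error.RateLimitError, openai.error.Timeout):
            # 429s at the time were often server overload, not a per-user limit
            if attempt == max_retries - 1:
                raise
            time.sleep(delay)
            delay *= 2  # back off before the next attempt
```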

If what I am reading is correct, we need to capture and handle a connection that is refused. I am not sure of the error code; I assume it will be a 404.

We should probably be handling that type of error in our code already (see the sketch below), but it will be a change in the API's behavior that is worth noting.
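If the change lands as described, a refused or dropped connection would most likely surface in the Python client as a connection error rather than a clean HTTP status. A minimal sketch of telling the failure modes apart, again assuming the legacy openai 0.x library with openai.api_key set; classify_failure is just an illustrative name:

```python
import openai  # legacy 0.x client; assumes openai.api_key is already set


def classify_failure(messages, model="gpt-4"):
    """Attempt a completion and report which failure mode occurred."""
    try:
        return openai.ChatCompletion.create(model=model, messages=messages)
    except openai.error.APIConnectionError as exc:
        # A refused/dropped connection surfaces here, not as an HTTP status code
        print(f"Connection refused or dropped: {exc}")
    except openai.error.ServiceUnavailableError as exc:
        # Overloaded servers may instead answer with a 503
        print(f"Server overloaded (503): {exc}")
    except openai.error.RateLimitError as exc:
        # The familiar 429, which may or may not be a per-user rate limit
        print(f"Rate limited (429): {exc}")
    return None
```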

The post reasoned that

“the endpoint was not going to give a result - and it knew this in advance - so why bother making a connection in the first place? It is destined to fail anyway” (paraphrased, not an actual quote)

I could be wrong. It is a very recent post, and it has not been confirmed by staff.

If it has been deployed, it may be the reason more calls are failing.


Awesome feedback, thank you!


Hey, in my view it has been like this ever since release. I got a key on the second day it was out and was experiencing the same thing. The API is really slow, so I end up using ChatGPT or the Playground. I hope they fix this quickly, as I'm sure many companies are integrating gpt-4 as we speak, and speed is crucial for user experience.