I strongly recommend using a backoff library. I like retry · PyPI
You can even hook in a “We’re sorry, it appears the server is currently overloaded” message after a certain amount of time; a sketch follows below.
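A minimal sketch of what that could look like, assuming the pre-1.0 openai Python client and its openai.error.APIError exception (the model, messages, and fallback message are just placeholders):

import openai
from retry import retry

# Retry on APIError with exponential backoff: roughly 1s, 2s, 4s, 8s between attempts.
@retry(openai.error.APIError, tries=5, delay=1, backoff=2, max_delay=30)
def create_chat_completion(**kwargs):
    return openai.ChatCompletion.create(**kwargs)

try:
    response = create_chat_completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello"}],
    )
except openai.error.APIError:
    # All retries exhausted: surface a friendly message instead of a traceback.
    print("We’re sorry, it appears the server is currently overloaded.")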
Been receiving a bunch of errors and timeouts today as well. Did not experience this last week.
Also getting 502s randomly.
raise error.APIError(
openai.error.APIError: HTTP code 502 from API (<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx</center>
</body>
</html>
)
I’ve also been getting ECONNRESET and Request failed with status code 502 errors in the past 10 hours. BTW, the OpenAI API takes several minutes before throwing an error and timing out, which is a bit annoying.
It does. Sometimes outages do not affect all the models/endpoints in the same way (sometimes they do). @ljia31523 All the strategies are pretty much explained in the link that @ruby_coder already shared. Feel free to reach out if you still have doubts or do not know how to implement them.
Experiencing the same issue with both davinci-003 and davinci-002.
It’s been almost 3 days with the same issue. It’s making it really hard to work.
What is most annoying to me is:
I try to set a timeout with openai.ChatCompletion.create(..., timeout=10), hoping that after 10 seconds, if the request is still hanging, it will stop.
I am using @retry(wait=wait_random_exponential(min=1, max=60), stop=stop_after_attempt(6)) on def chatcompletion_with_backoff(**kwargs): hoping to handle this.
HOWEVER, no timeout ever fires. I have been monitoring the process: multiple times a request has kept going for more than a minute, and yet no Timeout exception is thrown.
I’ve faced the same timeout issue. In the Python API, if you pass the parameter request_timeout to openai.ChatCompletion.create instead of timeout, it will throw a Timeout exception after n seconds. Hope it helps.
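Putting that together with the tenacity decorator mentioned above; this is only a sketch, assuming the pre-1.0 openai client, and the model/messages are placeholders:

import openai
from tenacity import retry, stop_after_attempt, wait_random_exponential

@retry(wait=wait_random_exponential(min=1, max=60), stop=stop_after_attempt(6))
def chatcompletion_with_backoff(**kwargs):
    # request_timeout (not timeout) makes the client raise openai.error.Timeout
    # after ~10 seconds, which the tenacity decorator then catches and retries.
    return openai.ChatCompletion.create(request_timeout=10, **kwargs)

response = chatcompletion_with_backoff(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)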
But yes, I’ve been facing these issues as well…