Frequent API timeout errors recently

Thanks, I have added a try/except and a timeout in my code. The API seems to be stable today.

1 Like

It's probably intermittent. If it kept timing out continuously for long enough, it would be something other than an intermittent issue.

Hi @georgei

Yeah, I hear you.

I’ve been working for about an hour and it TIMES OUT about 70% of the time or so…

🙂

3 Likes

Timeouts started increasing around 1am Sunday and continued through Monday. Back to normal today. Here’s a graph of the increase. The blue line is timeout errors.

Is it working with a retry decorator?

Call result = openai.ChatCompletion.create within a ThreadPoolExecutor, wrap future.result(10) in a try/except, and when concurrent.futures.TimeoutError is raised, use tenacity to retry on that exception.
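
A minimal sketch of that pattern, assuming tenacity and a 10-second per-call budget (the model name, pool size, and retry limits are placeholders, not the poster's exact code):

import concurrent.futures
import openai
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

executor = concurrent.futures.ThreadPoolExecutor(max_workers=4)

# Retry only when future.result() times out; back off exponentially between attempts.
@retry(retry=retry_if_exception_type(concurrent.futures.TimeoutError),
       stop=stop_after_attempt(5),
       wait=wait_exponential(multiplier=1, max=30))
def chat_with_timeout(messages):
    future = executor.submit(
        openai.ChatCompletion.create,
        model="gpt-3.5-turbo",
        messages=messages,
    )
    # Raises concurrent.futures.TimeoutError after 10 seconds; tenacity catches it and retries.
    # Note: the stalled request keeps running in its worker thread; it is not cancelled.
    return future.result(timeout=10)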

Just use plain HTTP with retries in Python. It looks like the read timeout is set to 600 seconds for each connection internally by the openai.ChatCompletion library, which is too long. So I used a regular HTTP connection and it worked great, with the connection timeout and read timeout set to 20 and 60 seconds.

import os
import requests
from retry import retry  # the "retry" pip package

@retry(Exception, tries=5, delay=1, backoff=2, max_delay=120)
def call_openai_api(chat_gpt_command, max_tokens, presence_penalty):
    OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
    url = "https://api.openai.com/v1/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {OPENAI_API_KEY}"
    }
    data = {
        "model": "text-davinci-003",
        "prompt": chat_gpt_command,
        "temperature": 0.7,
        "max_tokens": max_tokens,
        "top_p": 1,
        "frequency_penalty": 0,
        "presence_penalty": presence_penalty,
        "n": 1
    }
    # timeout=(connect, read) in seconds
    response = requests.post(url, headers=headers, json=data, timeout=(60, 120))
    response.raise_for_status()
    return response.json()
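
A quick usage example of the function above (the prompt and argument values are just placeholders):

result = call_openai_api("Write a haiku about retries.", max_tokens=64, presence_penalty=0)
print(result["choices"][0]["text"])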

2 Likes

You can add the parameter request_timeout; it will be passed to requests.post(timeout=xxx).

eg:

openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": prompt,
        }
    ],
    request_timeout=60,
)

I also keep getting errors, even though I set the timeout to 10 minutes (request_timeout=600); anything that takes longer than 5 minutes times out.

1 Like

They are playing dirty header tricks, and that’s all I’ll say…

Hey Cyntranic,

Would you mind elaborating please?

Hi, I have a question.
I set max_tokens to only 80, but the response still times out.
I don’t know the reason for the problem, because it didn’t happen two weeks ago.
I’m using Python.

The same question could be answered two weeks ago, and max_tokens was 256 at that time.
When I use the same max_tokens now, it times out, so I lowered my max_tokens.

Hi, I built a library to deal with the problems mentioned in this thread. It handles timeout, rate limit, and API key errors.

cc @crowdreactor @robeatz20

Use a user agent changer or a proxy; the API itself is fine.

Can you please share info about the library that you are talking about?

I’m seeing occasional timeouts. That isn’t a problem in itself, but the 600-second wait is too long. It would be nice to have a parameter for the timeout in the library so we can build better error handling around it.

2 Likes

I am handling it with the following:

except (openai.error.APIError,             # handling errors (for some of them I have no clue what they are!)
        openai.error.APIConnectionError,
        openai.error.RateLimitError,
        openai.error.ServiceUnavailableError,
        openai.error.Timeout,
        requests.exceptions.Timeout):

But it does not capture all the issues!
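
For what it’s worth, here is one way that except tuple can sit inside a simple retry loop; this is only a sketch, and the attempt count and backoff values are assumptions:

import time
import openai
import requests

def chat_with_retries(messages, max_attempts=5):
    for attempt in range(1, max_attempts + 1):
        try:
            return openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=messages,
            )
        except (openai.error.APIError,
                openai.error.APIConnectionError,
                openai.error.RateLimitError,
                openai.error.ServiceUnavailableError,
                openai.error.Timeout,
                requests.exceptions.Timeout):
            if attempt == max_attempts:
                raise
            # Exponential backoff before the next attempt.
            time.sleep(2 ** attempt)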

Look up in this thread and you’ll see that the request_timeout parameter is passed through to requests (timeout=request_timeout); you can then except openai.error.Timeout.
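
Something like this, assuming the pre-1.0 openai Python library (the 30-second value is just an example):

import openai

try:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello"}],
        request_timeout=30,  # forwarded to the underlying HTTP request's timeout
    )
except openai.error.Timeout:
    # The request exceeded 30 seconds; retry or fail fast here.
    pass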

1 Like