Thanks, I have a try/catch and added a timeout in my code. The API seems to be stable today.
It's probably intermittent. If it keeps timing out for more than a certain amount of time, I'd switch to something else.
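A minimal sketch of that "switch to something else after repeated timeouts" idea, using only the standard library (all names here are hypothetical, not from this thread):

```python
class TimeoutFallback:
    """Route calls to `fallback` once `threshold` consecutive timeouts occur."""

    def __init__(self, primary, fallback, threshold=3):
        self.primary = primary
        self.fallback = fallback
        self.threshold = threshold
        self.consecutive_timeouts = 0

    def call(self, *args, **kwargs):
        # Once the primary has timed out `threshold` times in a row, stop
        # hammering it and use the fallback instead.
        if self.consecutive_timeouts >= self.threshold:
            return self.fallback(*args, **kwargs)
        try:
            result = self.primary(*args, **kwargs)
            self.consecutive_timeouts = 0  # any success resets the counter
            return result
        except TimeoutError:
            self.consecutive_timeouts += 1
            raise
```

A fuller version would also probe the primary again after a cool-down period (a circuit-breaker pattern), but the counter above captures the basic idea.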
Hi @georgei
Yeah, I hear you.
I’ve been working for about an hour and it times out about 70% of the time or so…
Timeouts started increasing around 1am Sunday and continued through Monday. Back to normal today. Here’s a graph of the increase. The blue line is timeout errors.
Is it working with a retry decorator?
Call `openai.ChatCompletion.create` within a ThreadPoolExecutor, wrap `future.result(10)` in a try/except, and when `concurrent.futures.TimeoutError` is raised, use tenacity to retry on that exception.
Just use plain HTTP with retries in Python. It looks like the read timeout is set to 600 seconds for each connection internally by the openai library, which is far too long. So I used a regular HTTP connection and it worked great. But I set the connection and read timeouts explicitly, to 20 and 60 seconds.
```python
import os

import requests
from retry import retry  # the `retry` package from PyPI

@retry(Exception, tries=5, delay=1, backoff=2, max_delay=120)
def call_openai_api(chat_gpt_command, max_tokens, presence_penalty):
    OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
    url = "https://api.openai.com/v1/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {OPENAI_API_KEY}",
    }
    data = {
        "model": "text-davinci-003",
        "prompt": chat_gpt_command,
        "temperature": 0.7,
        "max_tokens": max_tokens,
        "top_p": 1,
        "frequency_penalty": 0,
        "presence_penalty": presence_penalty,
        "n": 1,
    }
    # timeout is (connect timeout, read timeout) in seconds
    response = requests.post(url, headers=headers, json=data, timeout=(60, 120))
    response.raise_for_status()  # surface HTTP errors so @retry can retry them
    return response.json()
```