Using a standard Python request to call the API usually performs quite well, but occasionally a request will hang indefinitely (it has been over 2 hours with no response). Has anyone experienced a similar problem, and if so, how did you solve it?
Example below. (This JSON data included a function call, but the request hangs randomly both with and without functions included.)
It should return some kind of error though. You can listen for that error and retry the request, but you might want to follow these guidelines: OpenAI Platform
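A rough sketch of what that catch-and-retry could look like with the pre-1.0 Python SDK (the retry count, per-attempt timeout, and backoff here are just placeholder values, not anything official):

import time
import openai

def chat_with_retry(messages, max_attempts=3):
    # Retry the request whenever the SDK raises an error (timeouts, connection drops, etc.).
    for attempt in range(1, max_attempts + 1):
        try:
            return openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=messages,
                request_timeout=30,  # assumption: 30 s per attempt
            )
        except openai.error.OpenAIError as exc:
            print(f"Attempt {attempt} failed: {exc}")
            time.sleep(2 ** attempt)  # simple exponential backoff between attempts
    raise RuntimeError("All retry attempts failed")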
I’m also having this issue. It seems like if you wait long enough you get output. Is there any way to get a faster response by requesting a very small amount of data?
I too am on this page (however many months later) because requests to gpt-3.5-turbo-1106 frequently seem to stall today. It worked lightning fast earlier yesterday, but began having issues during the evening. The 4-preview API still seems to work fine.
I had the same issue and solved it with the optional request_timeout parameter and incrementally increasing timeouts. The approach looks like this. This pattern is quite common for REST requests, since timeouts often result from overload.
import openai

# preparations here

# Retry with incrementally longer timeouts: 5, 10, 20 and finally 40 seconds.
for timeout_secs in (5, 10, 20, 40):
    try:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[<here go your messages>],
            request_timeout=timeout_secs,
        )
    except Exception as exc:
        print(f"Timed out after {timeout_secs} secs with exception {exc}, retrying...")
    else:
        if response:
            break
        print(f"Timed out after {timeout_secs} secs without response, retrying...")