jazzg
9
Is there a way to avoid this error?
I got a loop that broke today after 5 minutes and I didn’t even notice when it did.
The best way to adapt, I think, is to change your solution to avoid invoking the API so often, or to batch your requests to reduce the number of calls and cut the total time spent.
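To illustrate the batching idea, here is a minimal sketch (a hypothetical helper, not part of the OpenAI SDK): instead of one API call per item, number several items inside a single prompt so one completion covers all of them.

```python
def build_batched_prompt(items):
    """Pack several classification inputs into one numbered prompt,
    so a single API call replaces len(items) separate calls."""
    lines = ["Classify each of the following items, one label per line:"]
    for i, item in enumerate(items, start=1):
        lines.append(f"{i}. {item}")
    return "\n".join(lines)

prompt = build_batched_prompt(["great product!", "terrible support"])
```

You then split the completion back out line by line; the trade-off is a longer prompt per call versus far fewer calls against the rate limit.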
I came here looking to see if other people were encountering this. I guess it is reassuring that it’s not just me. But also unfortunate because I’m hoping to launch my app in a few weeks and hope this improves.
Was gonna try using another model, but for this feature I need the chat API to keep context. Guess I’ll just have to wait it out like everyone else.
jazzg
12
I’m using @backoff.on_exception(backoff.expo, openai.error.RateLimitError) from the backoff library. Trying
import backoff
import openai

@backoff.on_exception(backoff.expo, openai.error.RateLimitError)
def process(item):
    # mycode
    ...

for i in rlist:
    try:
        process(i)
    except TimeoutError:
        print("error")
        continue
but it still breaks…
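For anyone else hitting this: backoff only retries the exception types you list, so a timeout raised inside the decorated call will escape the decorator. A minimal hand-rolled sketch (no OpenAI-specific names here — TimeoutError/ConnectionError stand in for whatever transient errors your client raises) that retries both kinds with exponential backoff and jitter:

```python
import random
import time

def retry_with_backoff(fn, retries=5, base=1.0):
    """Retry fn() with exponential backoff plus jitter on transient errors."""
    for attempt in range(retries):
        try:
            return fn()
        except (TimeoutError, ConnectionError):
            # wait base * 2^attempt seconds, plus a little jitter
            time.sleep(base * (2 ** attempt) + random.random() * base)
    raise RuntimeError("all retries exhausted")
```

Adjust the exception tuple to whatever your client library actually raises; a hard retry cap keeps a broken endpoint from looping forever.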
1 Like
How do you get those reports of your queries? Is there an OpenAI webpage for that? I don’t see results that fine-grained in the OpenAI API.
Marking this. Yes, it’s incredibly slow now, and it keeps getting slower. I hope OpenAI can improve it as soon as possible.
2 Likes
jbackx
15
Performance of the OpenAI API is horrible at the moment. Are there plans to improve this soon? This instability in performance is blocking the rollout of our project.
4 Likes
seltz
16
API responses have consistently taken 20-50 seconds for about a week now; unusable, when ChatGPT itself seems faster than it has ever been.
2 Likes
Is there a way to get someone from OpenAI to comment on this? Why are paying customers being rate limited into unusable latencies? The model is supposed to be “turbo”; 30-40 seconds is not very “turbo” for a few hundred tokens. The API is wayyyyyy slower than the free chat? Why? I doubt it’s a technical issue, so is it a strategic decision to limit developers? If so, I think OpenAI should be more “open” with the community.
3 Likes
I think there are too many people using the OpenAI API services. Like, I am a bit shocked that people are now saying ‘gpt-3.5-turbo’ is slow, because I remember it being fast even with 1000+ token responses. So… I feel like the servers are packed now.
But my issue is more serious, because my company is using gpt-4, and gpt-4 is much slower, though it is accurate. We are about to launch this internally, and I can imagine our customer service team complaining that the chatbot is too slow.
1 Like
Totally suffering from the same problem: awesome responses, but awfully slow.
I just signed up for an OpenAI subscription myself, and I expected response times similar to ChatGPT’s, but with the gpt-3.5-turbo model responses take 30-60 seconds or time out completely.
I’m only using it for a demo application, but it’s almost unusable at this performance, and it’s extra disappointing that I had to pay to experience this.
1 Like
Actually, we’re lucky to have the ‘stream’ option now, which lets users watch GPT generate the answer as it goes. That way, people don’t feel stuck.
Still slow, but much better now.
How do you get the stream option to run?
stream (boolean, optional, defaults to false)
If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. See the OpenAI Cookbook for example code.
https://platform.openai.com/docs/api-reference/chat
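To make that concrete, here is a minimal sketch of consuming a streamed chat completion. The helper only assembles the content deltas, so it can be tried without a network call; the commented-out ChatCompletion.create call (openai-python v0.x style, per the docs linked above) shows where a real stream would come from.

```python
def consume_stream(chunks):
    """Print and concatenate the content deltas from a chat-completion stream."""
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        # the first delta carries the role, the last is empty; only
        # intermediate deltas carry a "content" fragment
        if "content" in delta:
            print(delta["content"], end="", flush=True)
            parts.append(delta["content"])
    return "".join(parts)

# With a real stream (assumption: openai-python v0.x):
# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": "Hello"}],
#     stream=True,
# )
# text = consume_stream(response)
```

The total latency is unchanged, but the first tokens usually arrive within a second or two, which is what makes the wait feel acceptable.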
1 Like
I am also getting the same delay; it took 60 s to get a response. Has this been resolved?
1 Like
ml2
26
#chatgpt
The gpt-3.5-turbo API does not give the same responses as the ChatGPT web interface.
How can I get the same results in my Python code?
I want long responses like the web version gives.
Can anyone guide me?
APIConnectionError: Error communicating with OpenAI: HTTPSConnectionPool(host=‘api.openai.com’, port=443): Max retries exceeded with url: /v1/completions (Caused by SSLError(SSLCertVerificationError(1, ‘[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)’)))
Why does this error come up?
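That CERTIFICATE_VERIFY_FAILED error is usually a local problem: Python cannot find a CA bundle to verify api.openai.com’s certificate. A quick stdlib-only diagnostic (no OpenAI calls involved) is to print where your interpreter looks for certificates:

```python
import ssl

# Show where this Python looks for CA certificates; an empty or stale
# cafile/capath here is a common cause of CERTIFICATE_VERIFY_FAILED.
paths = ssl.get_default_verify_paths()
print("cafile:", paths.cafile)
print("capath:", paths.capath)
print("env overrides:", paths.openssl_cafile_env, paths.openssl_capath_env)
```

On macOS installs from python.org, running the bundled “Install Certificates.command” is a common fix; alternatively, pip-install certifi and point the SSL_CERT_FILE environment variable at certifi.where().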
Still 4 times slower than the ChatGPT site and 2 times slower than the Playground. Is there a way to avoid this throttle by paying more?
1 Like