Is the API down for everyone or just me?

So…

It’s been 6 hours here and my API requests are still timing out.

What’s going on? Is anyone else experiencing this? Status page says that everything’s okay.

2 Likes

Why not search the site and read the other topics on this before posting?

Or at least look at the topics before posting? Why not?

Thanks

:slight_smile:

1 Like

I did, but I’m flabbergasted that more people aren’t posting about this… :rofl:

1 Like

Amazing.

I have seen post after post on this topic all day, so much that it is simply spam and noise.

Have a great day.

:slight_smile:

3 Likes

Facing the same issue.

So the API is down. Right?

No. Not right. The API is working for me with a proper retry/fallback strategy.
This is truly fascinating.

2 Likes

Are you using that for fine-tuning?
I am trying to fine-tune the model, but it seems like it’s getting disconnected every time.

Please see the status of the API here: https://status.openai.com/

The status says there are no issues currently, yet everyone using the 3.5 or 4 API is getting a “timeout” when sending a request.

1 Like

Nope, no fine-tuning now. I’m talking about requests to https://api.openai.com/v1/chat/completions.

The API is timing out for me too, even with a retry policy of 5 tries.

Can you share further details? What is the timeout per retry? Do you fall back to other models? Do you have backoff?

It is 5 retries with exponential backoff (1, 2, 4, 8, 16). No, I do not fall back to other models, but that’s a good idea. However, I’m not sure if DaVinci is also timing out, and it might require different prompts than 3.5 Turbo.

Ideally 3.5 just works consistently.

And what is the custom timeout per API call?

The timeout is currently at 20 seconds.
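
Roughly, the call looks like this. This is a simplified sketch using the plain requests library rather than the official client; call_chat is just a placeholder name, and I’m assuming the 1/2/4/8/16 backoff figures are seconds:

```python
import time
import requests

API_URL = "https://api.openai.com/v1/chat/completions"

def call_chat(messages, api_key, retries=5, timeout=20):
    # 5 tries, 20-second timeout per call, exponential backoff
    # of 1, 2, 4, 8, 16 seconds between attempts.
    last_error = None
    for attempt in range(retries):
        try:
            resp = requests.post(
                API_URL,
                headers={"Authorization": f"Bearer {api_key}"},
                json={"model": "gpt-3.5-turbo", "messages": messages},
                timeout=timeout,
            )
            resp.raise_for_status()
            return resp.json()
        except (requests.Timeout, requests.HTTPError) as err:
            last_error = err
            if attempt < retries - 1:
                time.sleep(2 ** attempt)  # 1, 2, 4, 8, 16
    raise last_error
```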

@crowdreactor thanks a lot for sharing the details about the implementation. This is the only way we developers can help each other debug these problems. So, for your case, I’d say:

  • 5 retries is probably too many, especially if you’re using the same model all the time.
  • 20s timeout is probably too short, especially if you’re asking for huge completions and you’re not streaming.

If your base model is gpt-3.5-turbo, I’d say to experiment with something like:

  • 1 call to turbo with timeout = 30s.
    Wait for 4s.
  • 1 call to turbo with timeout = 30s.
    Wait for 8s.
  • 1 call to davinci-003 with timeout = 30s.

And yeah, the output would obviously depend on the model. You can try to optimize your prompt for each model, but even if you don’t, it’s usually better to return something rather than nothing. Anyway, the actual implementation totally depends on your use case. You might want to set even longer initial timeouts (1 min or more), especially if your customers do not need real-time interaction with your app.
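
To make that concrete, here is a rough sketch of the idea above, again with the plain requests library. It is not a definitive implementation: resilient_completion, the key handling, and the davinci prompt conversion are placeholders, and I’m assuming text-davinci-003 on the completions endpoint for the fallback:

```python
import time
import requests

API_KEY = "sk-..."  # placeholder, load from your own config
CHAT_URL = "https://api.openai.com/v1/chat/completions"
COMPLETIONS_URL = "https://api.openai.com/v1/completions"

def _post(url, payload, timeout):
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=timeout,
    )
    resp.raise_for_status()
    return resp.json()

def resilient_completion(messages, davinci_prompt):
    # Two calls to gpt-3.5-turbo with a 30s timeout each,
    # waiting 4s and then 8s after a failure.
    for wait in (4, 8):
        try:
            return _post(
                CHAT_URL,
                {"model": "gpt-3.5-turbo", "messages": messages},
                timeout=30,
            )
        except (requests.Timeout, requests.HTTPError):
            time.sleep(wait)
    # Final fallback: one call to text-davinci-003 with a 30s timeout.
    return _post(
        COMPLETIONS_URL,
        {"model": "text-davinci-003", "prompt": davinci_prompt, "max_tokens": 256},
        timeout=30,
    )
```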

2 Likes

Alright, thanks, I will try that out. Unfortunately, this is a web app and customers need to interact with it.

For what it’s worth, I’m using the same prompts I’ve always used, and they were lightning fast before yesterday. It’s only since yesterday that these issues have started, and it doesn’t look like I’m the only one.

1 Like

You’re not. But this is not the first time that this has happened. And it won’t be the last one. Scaling up technology and predicting demand is not as easy as some people seem to believe. So our apps need to be ready to deal with partial or global outages. Because they will eventually happen.

3 Likes

Same here, I haven’t been able to access chat.openai.com for the whole day.

1 Like

The actual chat has gone down for me as well.

It’s reporting 429 errors. Hopefully they get their scaling figured out.
It must be an incredible task considering how fast everything is growing.

1 Like