openai.error.APIError: That model is currently overloaded with other requests. You can retry your request, or contact us through our help center at help.openai.com if the error persists. (Please include the request ID ***************** in your message.) (Error occurred while streaming.)
I’ve been encountering this error a lot lately, especially when I use stream=True; it seems to have become more frequent. Is there any solution you can suggest? It affects my app’s users very negatively. This error happens before the rate limit is exceeded.
Catch OpenAI API errors by exception type (there are about eight of them) and handle each one uniquely. You can’t fix a bad API key by retrying, but you can retry the “overloaded” error after a pause and also show the user a status message while you do.
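A minimal sketch of that idea, assuming the pre-1.0 openai Python SDK that your traceback shows (the openai.error module). The function name, model, and backoff settings are placeholders, not anything specific to your app:

```python
import time
import openai

# Hypothetical helper: the name, model, and retry settings are illustrative.
def chat_with_retry(messages, max_retries=5, base_delay=2.0):
    """Call ChatCompletion, retrying transient errors with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=messages,
            )
        except (openai.error.APIError,
                openai.error.Timeout,
                openai.error.APIConnectionError,
                openai.error.ServiceUnavailableError,
                openai.error.RateLimitError) as e:
            # Transient errors: wait, optionally tell the user you're retrying,
            # then try again with an exponentially growing delay.
            delay = base_delay * (2 ** attempt)
            print(f"Transient OpenAI error ({type(e).__name__}), retrying in {delay}s")
            time.sleep(delay)
        except (openai.error.AuthenticationError,
                openai.error.InvalidRequestError):
            # Not transient: retrying won't fix a bad key or a malformed request.
            raise
    raise RuntimeError("OpenAI request still failing after retries")
```

Note that with stream=True the error can also be raised while you iterate over the response generator (your message says “Error occurred while streaming”), so in that case the try/except would need to wrap the loop that consumes the stream, not just the create() call.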
I got this error from my server logs, and it is thrown exactly like this, without an error code. What should I do to catch it by error code? Also, do you have a suggested method or an example for this pause-and-retry approach?