RateLimitErrors increased drastically in the last month?

Am I the only one who runs into way too many of these?:
Rate Limit Error: The server is currently overloaded with other requests. Sorry about that! You can retry your request, or contact us through an Azure support request at: Microsoft Azure if the error persists.

I’m using GPT-4 32k deployed in Azure.
I can’t get anything done quickly. Is everyone else experiencing this as well?
Is there a trick to reduce how often this happens, even by a small amount?

The model is hosted on Azure; this is the OpenAI community. Did you send them a support request?

Also, combining multiple prompts into one request may help you stay under the RPM limits (Azure has a limit of 200 RPM and 32k tokens).
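Roughly something like this, as a minimal sketch (assuming the v1 `openai` Python SDK against an Azure deployment; the endpoint, API version, and deployment name are placeholders for your own resource, not anything from this thread). It packs several tasks into one chat completion call instead of sending them one by one, and retries with backoff since the error message itself says you can retry:

```python
# Minimal sketch, not production code: batch several tasks into ONE request
# so each of them no longer counts separately against the RPM quota.
# Endpoint, API version, key variable, and deployment name are placeholders.
import os
import time

from openai import AzureOpenAI, RateLimitError

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # adjust to a version your resource supports
)

questions = [
    "Summarize the attached meeting notes.",
    "List the action items from the same notes.",
    "Draft a short follow-up email.",
]

# One request instead of three: number the tasks and ask for numbered answers.
prompt = "Answer each task separately, numbered 1..{}:\n\n{}".format(
    len(questions),
    "\n".join(f"{i + 1}. {q}" for i, q in enumerate(questions)),
)

# Simple exponential backoff on rate limit errors, since the error text
# suggests retrying the request.
for attempt in range(5):
    try:
        response = client.chat.completions.create(
            model="gpt-4-32k",  # your Azure *deployment* name
            messages=[{"role": "user", "content": prompt}],
        )
        print(response.choices[0].message.content)
        break
    except RateLimitError:
        time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, 8s, 16s between retries
```

Whether batching like this is workable obviously depends on whether your tasks can share one context and be split apart again from the reply.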

Thanks @jochenschultz, I’ll send Azure a support request.
Combining multiple prompts unfortunately doesn’t work for my use case.

But fundamentally, this error is actually a “server error” and not really a “rate limit error”, right?

Azure also has its own, separate rate limits. So yes, that error is unrelated to those.