Consistent API errors in gpt-4o

Hi all! We are seeing frequent API errors of this sort when streaming with gpt-4o:

```
2024-05-14 08:52:38.337 | ERROR | core.call_models:call_assistant_stream:2028 - Error during llm request: APIError
2024-05-14 08:52:38.339 | ERROR | core.call_models:call_assistant_stream:2029 - Traceback (most recent call last):
  File "/root/core/call_models.py", line 1845, in call_assistant_stream
    async for chunk in stream:
  File "/usr/local/lib/python3.12/site-packages/openai/_streaming.py", line 147, in __aiter__
    async for item in self._iterator:
  File "/usr/local/lib/python3.12/site-packages/openai/_streaming.py", line 174, in __stream__
    raise APIError(
openai.APIError: The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error.
```

The issue disappeared after switching the model back to gpt-4-turbo.

Same here. I'm wondering whether retrying the request would actually resolve this.
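
As far as I can tell, the client's built-in `max_retries` setting only covers the initial HTTP request, so an error raised part-way through the stream (like the traceback above) would need an explicit retry around the whole streaming call. Here is a rough sketch of what I mean; the helper name `stream_with_retry` and the `max_attempts`/`backoff` parameters are just illustrative, not anything from the SDK:

```python
import asyncio

from openai import AsyncOpenAI, APIError

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment


async def stream_with_retry(messages, max_attempts=3, backoff=2.0):
    """Consume a streaming completion, restarting from scratch if the
    stream dies with an APIError part-way through."""
    for attempt in range(1, max_attempts + 1):
        chunks = []
        try:
            stream = await client.chat.completions.create(
                model="gpt-4o",  # the model the thread is about
                messages=messages,
                stream=True,
            )
            async for chunk in stream:
                # delta.content is None on the final chunk, hence the `or ""`
                chunks.append(chunk.choices[0].delta.content or "")
            return "".join(chunks)
        except APIError:
            if attempt == max_attempts:
                raise
            # Exponential backoff before retrying; partial output from the
            # failed attempt is discarded and the stream starts over.
            await asyncio.sleep(backoff ** attempt)


async def main():
    text = await stream_with_retry(
        [{"role": "user", "content": "Say hello"}]
    )
    print(text)


asyncio.run(main())
```

Whether this actually helps obviously depends on the server-side errors being transient rather than specific to the request.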