The chat completions endpoint works fine with gpt-4o-audio-preview and tools in a simple synchronous script, but as soon as I include tools in an async call (in my case inside a Chainlit step), it causes an internal server error. Is anybody else seeing this? I thought at first my tools schema was out of whack, but as mentioned, moving it over to a simple Python script without any async calls results in smooth sailing.
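For reference, here's a stripped-down sketch of roughly what the async call looks like (minimal example outside of Chainlit; the tool schema is simplified here, but mine has the same shape):

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

# Simplified tool schema -- the real one has more detail, but the structure is the same
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City and state, e.g. Boston, MA",
                    }
                },
                "required": ["location"],
            },
        },
    }
]

async def main():
    # Same model and tools as the sync script; the only difference is the async client/await
    response = await client.chat.completions.create(
        model="gpt-4o-audio-preview",
        messages=[{"role": "user", "content": "What's the weather in Boston?"}],
        tools=tools,
    )
    print(response)

asyncio.run(main())
```

The equivalent synchronous call (regular OpenAI client, no asyncio, same model and tools) returns the tool_calls response pasted at the bottom of this post.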
Error:
openai.InternalServerError: Error code: 500 - {'error': {'message': 'The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error. (Please include the request ID in your email.)', 'type': 'server_error', 'param': None, 'code': None}}
2024-10-23 19:42:47 - HTTP Request: POST "HTTP/1.1 500 Internal Server Error"
Response when run outside of async:
ChatCompletion(id='', choices=[Choice(finish_reason='tool_calls', index=0, logprobs=None, message=ChatCompletionMessage(content=None, refusal=None, role='assistant', audio=None, function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='', function=Function(arguments='{"location":"Boston, MA"}', name='get_current_weather'), type='function')]))], created=1729713115, model='gpt-4o-audio-preview-2024-10-01', object='chat.completion', service_tier=None, system_fingerprint='', usage=CompletionUsage(completion_tokens=17, prompt_tokens=80, total_tokens=97, completion_tokens_details=CompletionTokensDetails(audio_tokens=0, reasoning_tokens=0, text_tokens=17), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0, text_tokens=80, image_tokens=0)))