gpt-4o-audio-preview: including tools with async causes 500 response

The chat completions endpoint works fine with gpt-4o-audio-preview using tools in a simple script when not using async, but as soon as I include tools in an async function (in this particular case it's inside a Chainlit step), it causes an internal server error. Is anybody else seeing this? I thought at first my tools object schema was out of whack, but as mentioned, moving it over to a simple Python script without any async calls results in smooth sailing.
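Roughly what the async call looks like (a minimal sketch, not my exact Chainlit code; the get_current_weather tool and the prompt are placeholders):

import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}]

async def ask():
    # The same payload works fine with the synchronous client in a plain script.
    return await client.chat.completions.create(
        model="gpt-4o-audio-preview",
        messages=[{"role": "user", "content": "What's the weather in Boston, MA?"}],
        tools=tools,
    )

response = asyncio.run(ask())
print(response.choices[0].message.tool_calls)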

Error:

openai.InternalServerError: Error code: 500 - {'error': {'message': 'The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error. (Please include the request ID in your email.)', 'type': 'server_error', 'param': None, 'code': None}}
2024-10-23 19:42:47 - HTTP Request: POST "HTTP/1.1 500 Internal Server Error"

Response when run outside of async:

ChatCompletion(id='', choices=[Choice(finish_reason='tool_calls', index=0, logprobs=None, message=ChatCompletionMessage(content=None, refusal=None, role='assistant', audio=None, function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='', function=Function(arguments='{"location":"Boston, MA"}', name='get_current_weather'), type='function')]))], created=1729713115, model='gpt-4o-audio-preview-2024-10-01', object='chat.completion', service_tier=None, system_fingerprint='', usage=CompletionUsage(completion_tokens=17, prompt_tokens=80, total_tokens=97, completion_tokens_details=CompletionTokensDetails(audio_tokens=0, reasoning_tokens=0, text_tokens=17), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0, text_tokens=80, image_tokens=0)))

I was facing the same issue, but only when getting the full response back (no streaming enabled) with tools and tool_choice set: if a tool is needed, the completion returns a successful response, but if not, it gives back the same error you're seeing. I tried it with streaming instead, both async and sync, and it works with no problem. Have you tried that?
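In case it helps, this is roughly what I mean by the streaming variant (a sketch; the tool definition and prompt are placeholders, and the async client behaves the same with async for):

from openai import OpenAI

client = OpenAI()

tools = [{"type": "function", "function": {
    "name": "get_current_weather",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}}]

stream = client.chat.completions.create(
    model="gpt-4o-audio-preview",
    messages=[{"role": "user", "content": "What's the weather in Boston, MA?"}],
    tools=tools,
    stream=True,
)

for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta
    if delta.tool_calls:
        # Tool-call name and arguments arrive incrementally across chunks.
        print(delta.tool_calls[0].function.arguments or "", end="")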


I actually ended up switching to the Realtime API for now, which is magical (and expensive), but I'll probably circle back to this and try streaming as you mentioned for some other needs. Thanks for letting me know!


When you were using gpt-4o-audio with tools, did you have any trouble getting the model to respond using the tool call result? In my case it doesn't seem to use it in its response; it acts as if the tool result appended to the messages wasn't there and responds that it doesn't have the knowledge to answer that particular question.

Found the problem: I was passing the wrong type into the content key when appending the tool result to the messages.
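For anyone else who hits this, a rough sketch of the shape that works for me (illustrative tool, prompt, and result values): the tool message's content has to be a string, so serialize the result before appending it.

import json
from openai import OpenAI

client = OpenAI()

tools = [{"type": "function", "function": {
    "name": "get_current_weather",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}}]

messages = [{"role": "user", "content": "What's the weather in Boston, MA?"}]

# First call: the model decides to call the tool.
first = client.chat.completions.create(
    model="gpt-4o-audio-preview", messages=messages, tools=tools)
tool_call = first.choices[0].message.tool_calls[0]

# Append the assistant turn containing the tool call, then the tool result.
messages.append(first.choices[0].message)
messages.append({
    "role": "tool",
    "tool_call_id": tool_call.id,
    # This was my bug: content has to be a string, not a dict.
    "content": json.dumps({"temperature": "57F", "conditions": "cloudy"}),
})

# Second call: the model now answers using the tool result.
followup = client.chat.completions.create(
    model="gpt-4o-audio-preview", messages=messages, tools=tools)
print(followup.choices[0].message.content)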