InternalServerError 500 with gpt-3.5-turbo-1106

Hello, I’m getting a 500 internal server error each time I try to send a request to gpt-3.5-turbo-1106. I’m using parallel function-calling (tools).

Getting a response from gpt-3.5-turbo-1106 works fine as long as the model doesn't want to call a function. I'm confused.

{'error': {'message': 'The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error. (Please include the request ID 204bf066ff0006d7e61e179879344523 in your email.)', 'type': 'server_error', 'param': None, 'code': None}}

It turns out I had to increase max_tokens; I had it set to 30. For some reason, that caused an internal server error whenever the model wanted to call more than one function. The error was raised on the initial client.chat.completions.create call, before any processing of tool calls, so the 500 InternalServerError was a misleading way to report it.
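For reference, here's a minimal sketch of the kind of call that reproduced it for me. The get_weather tool and the two-city prompt are placeholders, not my actual setup; the point is just a tiny max_tokens combined with a prompt that triggers parallel tool calls.

```python
from openai import OpenAI

client = OpenAI()

# Placeholder tool definition, assumed for illustration only.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# With max_tokens=30, a prompt that makes the model emit two tool calls
# came back as the 500 server_error above instead of a normal response.
response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=[
        {"role": "user", "content": "What's the weather in Paris and in Tokyo?"}
    ],
    tools=tools,
    max_tokens=30,  # too small for two tool calls; raise this (or omit it) to fix
)

print(response.choices[0].message.tool_calls)
```

Raising max_tokens (or leaving it unset) made the same request succeed.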
