Internal server error on fine-tuned model

I’ve created a fine-tuned model based on gpt-3.5-turbo-1106. I’m using function calling, and when I POST to the v1/chat/completions endpoint I get an internal server error with the following message:

“The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error. (Please include the request ID 0500d7e2651ed3936d17c3726bbbb51b in your email.)”

I was not able to find a place to report this error at help.openai.com, so I’m posting it here in the hope that someone at OpenAI might investigate.

I believe the format of the request is correct: the same request sent to the base model or to other fine-tuned models does not produce the internal error. There’s something “special” about this particular fine-tuned model that fails.
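For reference, a request of the kind described above might look like the following minimal sketch. The fine-tune ID, function name, and schema are hypothetical placeholders, not details from the failing model:

```python
import json

# Hypothetical request body for POST /v1/chat/completions
# using the legacy "functions" style of function calling.
request_body = {
    "model": "ft:gpt-3.5-turbo-1106:my-org::abc123",  # placeholder fine-tune ID
    "messages": [
        {"role": "user", "content": "What is the weather in Paris?"}
    ],
    "functions": [
        {
            "name": "get_weather",  # hypothetical example function
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
}

print(json.dumps(request_body, indent=2))
```

If a request shaped like this succeeds against the base model but fails against the fine-tune, that points at the model rather than the request.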

The model is likely emitting tokens that the endpoint cannot parse.

Have you included a function specification in the API call — that is, functions for everything your model was trained on, so you can see what it tries to invoke?

Are you using a small max_tokens, so the model can’t write a complete function call?

Are you trying to use the tools parameter instead of the functions parameter in your API call?
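To illustrate the difference that last question is getting at, here is a sketch of the same function schema expressed in both request styles. The newer tools style wraps each function in a {"type": "function"} object; the schema itself is a hypothetical example:

```python
import json

# Hypothetical function schema shared by both request styles.
weather_fn = {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# Legacy style: a bare list of schemas under "functions".
legacy_style = {"functions": [weather_fn]}

# Newer style: each function wrapped in a {"type": "function"}
# object under "tools".
tools_style = {"tools": [{"type": "function", "function": weather_fn}]}

print(json.dumps(tools_style, indent=2))
```

A model fine-tuned on one style may behave oddly when called with the other, so it is worth checking which style the training data used.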


OpenAI should provide a completion endpoint and full logprobs even when functions are used, with the special tokens of chat models encoded, so that developers, who should not be treated like adversaries, can diagnose the models they have invested in.