According to the documentation, the following parameters are currently unsupported for reasoning models: temperature, top_p, presence_penalty, frequency_penalty, logprobs, top_logprobs, logit_bias, max_tokens, and parallel tool calling.
Does this mean that parameters not mentioned should be considered supported?
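For reference, here is a minimal sketch of the kind of request I am sending, assuming the openai Python SDK (>= 1.x); the resource endpoint and deployment name are placeholders, not my actual values. It keeps only parameters outside the unsupported list and uses max_completion_tokens in place of max_tokens:

```python
# Minimal sketch; YOUR-RESOURCE and the deployment name are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_version="2024-12-01-preview",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
)

response = client.chat.completions.create(
    model="o1-mini",  # Azure deployment name (placeholder)
    messages=[{"role": "user", "content": "Summarize the rules of chess."}],
    # max_completion_tokens replaces max_tokens for reasoning models;
    # temperature, top_p, the penalties, logprobs, and logit_bias are
    # omitted because the docs list them as unsupported.
    max_completion_tokens=1024,
)
print(response.choices[0].message.content)
```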
When I use the “stop” parameter with o1-mini, I receive a 400 invalid_request_error. However, the same request works fine with the o1 model, so I suspect this might be a bug in o1-mini. Both models use api-version=2024-12-01-preview:
{
  "error": {
    "message": "Unsupported parameter: 'stop' is not supported with this model.",
    "type": "invalid_request_error",
    "param": "stop",
    "code": "unsupported_parameter"
  }
}
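For completeness, a sketch of how the 400 can be reproduced with a raw REST call; the resource name and deployment name are placeholders, and the prompt is illustrative:

```python
# Sketch of a raw REST reproduction against Azure OpenAI.
import os
import requests

url = (
    "https://YOUR-RESOURCE.openai.azure.com/openai/deployments/"
    "o1-mini/chat/completions?api-version=2024-12-01-preview"
)

resp = requests.post(
    url,
    headers={"api-key": os.environ["AZURE_OPENAI_API_KEY"]},
    json={
        "messages": [{"role": "user", "content": "Count to ten."}],
        "stop": ["7"],  # this parameter triggers the 400 on the o1-mini deployment
    },
)
print(resp.status_code)  # 400 on o1-mini; the same request succeeds on o1
print(resp.json())
```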
"o1-mini-2024-09-12"

HTTP error occurred: Server error '500 Internal Server Error' for url 'https://api.openai.com/v1/chat/completions'

{
  "error": {
    "message": "The model produced invalid content. Consider modifying your prompt if you are seeing this error persistently.",
    "type": "model_error",
    "param": null,
    "code": null
  }
}
o1-mini works fine with a stop sequence that is hard to produce in its internal reasoning.
Conclusion: the "stop" parameter works on OpenAI's API, even across all reasoning models.
Lesson: anything you'd explicitly tell the AI about as a stop sequence, a reasoning model will probably think about internally, and stopping mid-reasoning produces the 500 Internal Server Error shown above.
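To illustrate the pattern that succeeded in testing, here is a sketch assuming the openai Python SDK with OPENAI_API_KEY set; the prompt and stop sequence are arbitrary choices, not the exact values tested. The key point is to pick a stop sequence the model is unlikely to emit while reasoning, and not to mention it in the prompt:

```python
# Sketch: "###" is an arbitrary marker, chosen because it is unlikely
# to appear in the model's internal reasoning.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-mini",
    messages=[{"role": "user", "content": "List three prime numbers."}],
    stop=["###"],  # not mentioned in the prompt, so it won't leak into reasoning
)
print(response.choices[0].message.content)
```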
Thank you for your testing and insights!
Since the “stop” parameter works on OpenAI’s side, I’ll investigate further to see if this issue is specific to Azure OpenAI.