Does o1-mini officially support the "stop" parameter?

According to the documentation, the following parameters are currently unsupported for reasoning models: temperature, top_p, presence_penalty, frequency_penalty, logprobs, top_logprobs, logit_bias, max_tokens, and parallel tool calling.

Does this mean that parameters not mentioned should be considered supported?

When I use the “stop” parameter with o1-mini, I receive a 400 model_error. However, the same request works fine with the o1 model, so I suspect this might be a bug in o1-mini.

Both models are using api-version=2024-12-01-preview.
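For reference, a minimal sketch of the kind of request that triggers this, assuming the openai Python SDK's Azure client (the endpoint, key, and deployment name are placeholders):

```python
# Hypothetical sketch: reproducing the 400 via the Azure OpenAI Python SDK.
# Endpoint, API key, and deployment name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-API-KEY",
    api_version="2024-12-01-preview",
)

# The same request body succeeds with an o1 deployment but fails with o1-mini.
response = client.chat.completions.create(
    model="o1-mini",  # Azure deployment name (placeholder)
    messages=[{"role": "user", "content": 'Repeat "Testing!"'}],
    stop=["!"],
)
print(response.choices[0].message.content)
```

The error returned: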

{
    "error": {
        "message": "Unsupported parameter: 'stop' is not supported with this model.",
        "type": "invalid_request_error",
        "param": "stop",
        "code": "unsupported_parameter"
    }
}

Maybe something to add to Jay’s epic parameter support table? `developer` role not accepted for o1/o1-mini/o3-mini - #7 by _j

Since you are passing an api-version, you are using Azure. You can look up the Swagger spec to see which parameters Azure validates as input versus passes through to the model.

OpenAI stop parameter testing

messages = [{"role": "user", "content": 'Repeat "Testing!"'}]
stop = ["!"]
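A rough sketch of the comparison (not the original test harness; the model list and error handling are assumptions), using the standard openai Python SDK against api.openai.com:

```python
# Rough sketch of the stop-parameter comparison across reasoning models.
from openai import OpenAI, APIError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{"role": "user", "content": 'Repeat "Testing!"'}]
stop = ["!"]

for model in ["o3-mini-2025-01-31", "o1-2024-12-17", "o1-mini-2024-09-12"]:
    try:
        response = client.chat.completions.create(
            model=model, messages=messages, stop=stop
        )
        print(model, "->", response.choices[0].message.content)
    except APIError as e:
        print(model, "-> error:", e)
```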

o3-mini - working

"o3-mini-2025-01-31"
{
    "index": 0,
    "message": {
        "role": "assistant",
        "content": "Testing",
        "refusal": null
    },
    "finish_reason": "stop"
}


o1 - working

"o1-2024-12-17"
{
    "index": 0,
    "message": {
        "role": "assistant",
        "content": "Testing",
        "refusal": null
    },
    "logprobs": null,
    "finish_reason": "stop"
}


o1-mini - server error

"o1-mini-2024-09-12"
HTTP error occurred: Server error '500 Internal Server Error' for url 'https://api.openai.com/v1/chat/completions'
{
    "error": {
        "message": "The model produced invalid content. Consider modifying your prompt if you are seeing this error persistently.",
        "type": "model_error",
        "param": null,
        "code": null
    }
}

o1-mini - working, with a stop sequence that is unlikely to appear in internal reasoning

stop = ["@@@"]

"o1-mini-2024-09-12", "o1-preview-2024-09-12"
{
    "index": 0,
    "message": {
        "role": "assistant",
        "content": "Testing!",
        "refusal": null
    },
    "finish_reason": "stop"
}

===

Conclusion: the stop parameter works on OpenAI's API, across all of the reasoning models.

Lesson: any stop sequence you would explicitly tell the AI about, a reasoning model will probably also produce in its internal reasoning, and stopping mid-reasoning is what yields the "500 Internal Server Error".


Thank you for your testing and insights!
Since the “stop” parameter works on OpenAI’s side, I’ll investigate further to see if this issue is specific to Azure OpenAI.
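To isolate where the rejection happens, one option is to send the same body directly to the Azure REST endpoint and inspect the raw response. A minimal sketch, with resource name, deployment name, and key as placeholders:

```python
# Hypothetical sketch: raw REST call to Azure OpenAI to check whether the
# "stop" parameter is rejected by Azure's request validation or by the model.
import requests

resource = "YOUR-RESOURCE"
deployment = "o1-mini"  # your Azure deployment name (placeholder)
url = (
    f"https://{resource}.openai.azure.com/openai/deployments/"
    f"{deployment}/chat/completions?api-version=2024-12-01-preview"
)
body = {
    "messages": [{"role": "user", "content": 'Repeat "Testing!"'}],
    "stop": ["@@@"],  # a sequence unlikely to appear in internal reasoning
}
resp = requests.post(url, headers={"api-key": "YOUR-API-KEY"}, json=body)
print(resp.status_code, resp.text)
```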
