Stop sequence doesn't work with gpt-5.1-chat-latest

We are not able to call the new chat model "gpt-5.1-chat-latest" with the "stop" parameter.
We receive the error "Unsupported parameter: 'stop' is not supported with this model.".

The previous model "gpt-5-chat-latest" accepts this parameter without any problem.
The documentation doesn't contain any note that this parameter is no longer supported, either.

Is it a bug? Can we expect it to be resolved?
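For reference, the difference in behaviour can be guarded client-side. A minimal sketch (the `build_chat_request` helper and the `gpt-5.1` prefix check are my own illustration, not an official SDK function; the error text is copied from the report above):

```python
def build_chat_request(model: str, messages: list, stop=None) -> dict:
    """Assemble a Chat Completions payload, mirroring the behaviour
    reported above: gpt-5-chat-latest accepts `stop`, while
    gpt-5.1-chat-latest rejects it."""
    payload = {"model": model, "messages": messages}
    if stop is not None:
        if model.startswith("gpt-5.1"):
            # Same error the API returns for 5.1-series chat models.
            raise ValueError(
                "Unsupported parameter: 'stop' is not supported with this model."
            )
        payload["stop"] = stop
    return payload
```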


What you're seeing is expected behaviour for gpt-5.1-chat-latest. The 5.1-series models run on a newer Chat Completions pathway, and some of the older parameters, including stop, are no longer supported there. The older gpt-5-chat-latest model still accepts stop, which is why your previous setup worked.

For 5.1 models, OpenAI is steering people toward structured output controls instead of token-level stop sequences. The recommended alternatives are to use a response_format / JSON schema to shape and bound the output, or to use tool calls if you need a strict boundary between model output and application logic.
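A sketch of the structured-output alternative (the schema name and fields are illustrative; the overall `response_format` shape follows the Chat Completions structured-output API):

```python
# Instead of cutting generation with `stop`, bound the output with a
# strict JSON schema: the model can only emit the fields declared here.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "bounded_answer",     # illustrative name
        "strict": True,               # enforce the schema exactly
        "schema": {
            "type": "object",
            "properties": {"answer": {"type": "string"}},
            "required": ["answer"],
            "additionalProperties": False,
        },
    },
}
```

This `response_format` dict would be passed alongside `model` and `messages` in the request.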

If you really need a custom stop marker, you can still have the model emit a sentinel string inside a schema field and truncate client-side, but native stop isn’t supported on 5.1.
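The client-side truncation is trivial; a sketch (the `<END>` sentinel is an arbitrary choice you would also instruct the model to emit):

```python
def truncate_at_sentinel(text: str, sentinel: str = "<END>") -> str:
    """Client-side stand-in for a native stop sequence: keep
    everything before the first occurrence of the sentinel."""
    idx = text.find(sentinel)
    return text if idx == -1 else text[:idx]
```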

The documentation could state this more explicitly, so thanks for surfacing it.

Now, what if I want to use "stop" to guard against the model looping into nonsense structured output (like endless continuations)? For example: multiple JSON objects, which could be cut off by a closing-then-reopening sequence, or strings that go nuts with linefeeds or tabs, which could be cut off by a stop sequence saying "I don't want 10 tabs in a row"…
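Those two guards can be approximated after the fact on the client; a sketch (the patterns and the tab threshold are my own choices, standing in for the stop sequences described above):

```python
import re

def clamp_runaway(text: str, max_tabs: int = 10) -> str:
    """Client-side stand-in for the stop sequences described above:
    cut at the first JSON object that closes and then re-opens,
    and cut before any run of more than `max_tabs` tabs."""
    # A closing brace followed (across whitespace/newline) by an
    # opening brace means a second JSON object started: keep only
    # the first object, including its closing brace.
    m = re.search(r"\}\s*\n\s*\{", text)
    if m:
        text = text[: m.start() + 1]
    # An excessive run of tabs: truncate just before it.
    m = re.search(r"\t{%d,}" % (max_tabs + 1), text)
    if m:
        text = text[: m.start()]
    return text
```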


gpt-5.1-chat-latest is not behaving like a non-reasoning chat completions model the way gpt-5-chat did. Another symptom: truncate the output with max tokens, and instead of the user-visible text you get no output at all, while everything that should have been delivered is captured as reasoning:

{
  "model": "gpt-5.1-chat-latest",
  "max_completion_tokens": 25,
  ...

input tokens: 55    output tokens: 25
uncached: 55        non-reasoning: 0
cached: 0           reasoning: 25
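The symptom in those numbers can be checked programmatically; a sketch (the helper is mine; the field names follow the Chat Completions usage object, with `reasoning_tokens` under `completion_tokens_details`):

```python
def reasoning_ate_budget(usage: dict) -> bool:
    """True when the entire completion budget went to reasoning
    tokens, leaving no user-visible output -- the symptom above
    (output tokens: 25, reasoning: 25, non-reasoning: 0)."""
    details = usage.get("completion_tokens_details", {})
    reasoning = details.get("reasoning_tokens", 0)
    return reasoning > 0 and reasoning >= usage.get("completion_tokens", 0)
```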

Then:
"What \"Juice\" level did I just transmit?" → AI: I can confirm that you did transmit a juice value

or

"messages": [
  {
    "role": "system",
    "content": "…What \"role\" prefix am I transmitting this message with?"
  }
]

→ AI: I can say this: the content you just sent is treated simply as a normal user request from my perspective.

So call it what it is: a “don’t even bother, ChatGPT user.”


Are the models “gpt-5.1-chat-latest” and “gpt-5.1” (with the reasoning level set to “none”) different at all?

Or is the chat model “gpt-5.1-chat-latest” just a “shortcut” where I don’t have to set reasoning, but it’s the same model as the regular gpt-5.1?
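For concreteness, the two calls being compared look like this (a sketch; whether the backends behave identically is exactly the open question here, and the `reasoning_effort` parameter is only accepted by models that expose it):

```python
# Explicitly request no reasoning on the regular model...
call_a = {
    "model": "gpt-5.1",
    "reasoning_effort": "none",
    "messages": [{"role": "user", "content": "Hello"}],
}

# ...versus the chat alias, which exposes no reasoning knob at all.
call_b = {
    "model": "gpt-5.1-chat-latest",
    "messages": [{"role": "user", "content": "Hello"}],
}
```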


Guess what… If you liked controlling an AI model with a system message… and having it stop:

As part of our continuous upgrade process, we are deprecating the following model: chatgpt-4o-latest. Access to this model will be shut off on Feb 16, 2026.

If it’s not available in ChatGPT anyway, I suppose there is no point in providing “latest” for API experiments.

Whatever they did with gpt-5.1-chat-latest, it makes no sense and does not match their documentation.

Or can someone explain what is happening with reasoning on this model, which is not supposed to have reasoning (but is producing reasoning only)?