We are not able to call the new chat model “gpt-5.1-chat-latest” with the “stop” sequence.
We receive the error “Unsupported parameter: ‘stop’ is not supported with this model.”.
The previous model “gpt-5-chat-latest” runs with this parameter without any problem.
The documentation doesn’t even contain a note that this parameter is no longer supported.
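For reference, a trimmed-down sketch of the kind of request we are sending (the messages here are just placeholders; only the model name changed from our working setup):

```python
from openai import OpenAI

client = OpenAI()

# Same request shape that works with gpt-5-chat-latest; only the model differs.
response = client.chat.completions.create(
    model="gpt-5.1-chat-latest",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "List three colors."},
    ],
    stop=["\n\n"],  # -> 400: "Unsupported parameter: 'stop' is not supported with this model."
)
print(response.choices[0].message.content)
```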
What you’re seeing is expected behaviour for gpt-5.1-chat-latest. The 5.1-series models run on a newer Chat Completions pathway, and a few of the older parameters, including stop, aren’t supported there anymore. The older gpt-5-chat-latest model still accepts stop, which is why your previous setup worked.
For 5.1 models, OpenAI is steering people toward structured output controls instead of token-level stop sequences. The recommended alternatives are to use a response_format / JSON schema to shape and bound the output, or to use tool calls if you need a strict boundary between model output and application logic.
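For example, a minimal sketch of the response_format route (the schema and field names are just illustrative, and I’m assuming the 5.1 chat model accepts the same json_schema shape as other Chat Completions models):

```python
from openai import OpenAI

client = OpenAI()

# Bound the output with a strict JSON schema instead of a token-level stop sequence.
response = client.chat.completions.create(
    model="gpt-5.1-chat-latest",
    messages=[{"role": "user", "content": "Summarize the release notes in one sentence."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "summary",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {"summary": {"type": "string"}},
                "required": ["summary"],
                "additionalProperties": False,
            },
        },
    },
)
print(response.choices[0].message.content)  # a single JSON object matching the schema
```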
If you really need a custom stop marker, you can still have the model emit a sentinel string inside a schema field and truncate client-side, but native stop isn’t supported on 5.1.
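The sentinel route is then just string handling on your side, roughly like this (the marker string is arbitrary):

```python
SENTINEL = "<<END>>"  # arbitrary marker the model is instructed to emit, e.g. via the system message

def truncate_at_sentinel(text: str) -> str:
    """Keep everything before the first occurrence of the sentinel."""
    return text.split(SENTINEL, 1)[0]

# With a system prompt like "Finish your answer with <<END>>":
raw = "Here is the answer.<<END>> anything after this gets dropped"
print(truncate_at_sentinel(raw))  # -> "Here is the answer."
```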
The documentation could state this more explicitly, so thanks for surfacing it.
Now, what if I want to use “stop” to guard against the model going into a loop of nonsense structured output (like endless continuations): multiple JSON objects that could be cut off at a closing-and-reopening sequence, or strings that go nuts with linefeeds or tabs and could be cut off by a stop sequence saying I don’t want ten tabs in a row…
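The only approximation I can see right now is to stream and bail out client-side when a degenerate pattern shows up, something like this (the guards and thresholds are made up):

```python
from openai import OpenAI

client = OpenAI()

# Stream the completion and stop consuming once the output starts looping.
stream = client.chat.completions.create(
    model="gpt-5.1-chat-latest",
    messages=[{"role": "user", "content": "Return one JSON object describing a color."}],
    stream=True,
)

buffer = ""
for chunk in stream:
    delta = chunk.choices[0].delta.content or ""
    buffer += delta
    # Crude guards for the failure modes above: runs of tabs, or a second JSON object starting.
    if "\t" * 10 in buffer or "}{" in buffer:
        break  # abandon the rest of the stream

print(buffer)
```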
gpt-5.1-chat-latest is not behaving like a non-reasoning chat completions model the way gpt-5-chat did. Another symptom: truncate the output with max tokens and what should be user-visible output comes back empty, while what should have been delivered is captured as reasoning instead.
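Roughly how to see it: cap a request and check where the tokens went (a sketch; the cap value is arbitrary, and I’m assuming the usage details are populated the same way as for other reasoning models):

```python
from openai import OpenAI

client = OpenAI()

# Cap the completion and inspect where the token budget actually went.
response = client.chat.completions.create(
    model="gpt-5.1-chat-latest",
    messages=[{"role": "user", "content": "Explain what a stop sequence is."}],
    max_completion_tokens=64,
)

print(repr(response.choices[0].message.content))   # empty, per the symptom above
print(response.usage.completion_tokens_details)    # reasoning_tokens ate the budget
print(response.choices[0].finish_reason)           # "length"
```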
Guess what… If you liked controlling an AI model with a system message… and having it stop
As part of our continuous upgrade process, we are deprecating the following model: chatgpt-4o-latest. Access to this model will be shut off on Feb 16, 2026.
If it’s not available in ChatGPT anyway, I suppose there is no point in providing “latest” for API experiments.