This is an issue I discovered while testing the recently released gpt-5.1-chat-latest model.
The documentation indicates that gpt-5.1-chat-latest does not support reasoning.
However, during testing I confirmed that reasoning tokens were used.
Code used:
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.AIAI_OPENAI_API_KEY });

(async () => {
  const result = await openai.responses.create({
    model: 'gpt-5.1-chat-latest',
    input: 'How much gold would it take to coat the Statue of Liberty in a 1mm layer?',
    reasoning: { effort: null },
  });
  console.log(result);
})();
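For reference, the place reasoning usage shows up is the response's usage object. A minimal sketch of the check, assuming the `usage.output_tokens_details.reasoning_tokens` field of the Responses API (the sample object below is illustrative, not a real API response):

```javascript
// Returns the reasoning-token count from a Responses API result,
// or 0 if the usage details are absent.
function reasoningTokensUsed(result) {
  const details = result.usage?.output_tokens_details;
  return details ? details.reasoning_tokens ?? 0 : 0;
}

// Illustrative response shape (made-up numbers, not actual billing data):
const sample = {
  usage: {
    input_tokens: 18,
    output_tokens: 650,
    output_tokens_details: { reasoning_tokens: 128 },
  },
};

console.log(reasoningTokensUsed(sample)); // 128
```

A non-zero value here on gpt-5.1-chat-latest is what contradicts the docs.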
Exactly
What is happening here? This makes absolutely no sense.
Why is gpt-5.1-chat-latest a reasoning-only model when the models page does not say it supports reasoning? (None of the chat-latest models were ever reasoning models.)
I can't switch an app that was using completions with no reasoning on gpt-5-chat-latest over to gpt-5.1-chat-latest.
Now gpt-5.1-2025-11-13 supports reasoning_effort: 'none', but gpt-5.1-chat-latest does not (only medium?).
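In case it helps anyone hitting the same wall, here's a sketch of the workaround I'd use: only attach the reasoning parameter when the model is known to accept it. The model list is my assumption from this thread, not from official documentation:

```javascript
// Models assumed (per this thread, not docs) to accept reasoning.effort.
const EFFORT_PARAM_MODELS = ['gpt-5.1', 'gpt-5.1-2025-11-13'];

// Builds a Responses API request body, omitting the reasoning block
// for models (like the chat-latest family) that reject it.
function buildRequest(model, input, effort) {
  const req = { model, input };
  if (effort && EFFORT_PARAM_MODELS.includes(model)) {
    req.reasoning = { effort };
  }
  return req;
}

console.log(buildRequest('gpt-5.1-2025-11-13', 'hi', 'none'));
// → body includes reasoning: { effort: 'none' }
console.log(buildRequest('gpt-5.1-chat-latest', 'hi', 'none'));
// → body omits the reasoning parameter entirely
```

This avoids a hard error when switching model names, though it obviously doesn't stop chat-latest from reasoning internally.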
Nope, still confused about the reasoning behavior of the chat-latest models since gpt-5.1-chat-latest shipped.
Nobody seems to have an answer on this (and not many seem to care).
The "chat" models are your copy of what runs on ChatGPT, running with that configuration. They don't accept a reasoning effort parameter; logically, you get the same internal reasoning (and pre-judgement) that is done on ChatGPT.
The actual reasoning tokens are obfuscated, and you get billed $0 for them.