```
"message": "Unsupported parameter: 'temperature' is not supported with this model."
```
This is now a thing with GPT-5 models (since they're reasoning models). How does that work when a request might get handed off to a "non-thinking" model (as is done in ChatGPT)? Or is this just not a thing anymore unless we're specifically reaching out to 4.1?
This is a breaking change and absolutely unexpected. I really hope they add it back, as a significant number of real-world applications rely on more deterministic responses.
You experience the same behavior with the other API reasoning models when attempting to set top_p or temperature. You can send the default value of 1 successfully; any divergence from that will fail validation.
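For example, a minimal sketch (assuming the openai Python SDK v1.x and that the `gpt-5` model is available on your key):

```python
from openai import OpenAI, BadRequestError

client = OpenAI()
messages = [{"role": "user", "content": "Say hello."}]

# the default value (1) passes validation on reasoning models
ok = client.chat.completions.create(model="gpt-5", temperature=1, messages=messages)
print(ok.choices[0].message.content)

# any other value is rejected before the request runs
try:
    client.chat.completions.create(model="gpt-5", temperature=0.2, messages=messages)
except BadRequestError as e:
    print(e)  # an unsupported parameter/value error like the one quoted above
```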
The Playground is what exposes an error that an API programmer with this understanding would rarely make: you can easily switch models there and be shown a call that will fail. I expect this will be tweaked as things settle in.
One has to assume some internal, task-based self-adjustment replaces the control you'd otherwise have.
max_tokens is not remapped either, because max_completion_tokens reflects a budget (which includes reasoning tokens) and not just what you receive.
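A minimal sketch of the replacement parameter (again assuming the openai Python SDK; the reasoning-token split is reported under usage.completion_tokens_details):

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-5",
    # max_tokens is rejected for reasoning models; max_completion_tokens is a
    # budget that also covers hidden reasoning tokens, not just visible output
    max_completion_tokens=512,
    messages=[{"role": "user", "content": "Name three prime numbers."}],
)
print(response.choices[0].message.content)
print(response.usage.completion_tokens_details.reasoning_tokens)
```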
I’m using GPT-4.1 for RP/world-building, and the “temperature” parameter has been very important for the quality of output. It even took a while to figure out the right “temperature” for each project. It’s such a big parameter that I find it odd that there’s no mention of it in the docs related to GPT-5 or the migration to it.
This is extremely annoying, but what I find even more annoying is that OpenAI does not have a proper way for developers to give feedback about API issues like this one.
This was a bit of a surprise. I was unable to find anywhere in the docs that temperature and top_p are not available for the GPT-5 series of models, which threw off the update of my code. Working in a healthcare setting, this is somewhat of a concern, as locking down the variability of responses is a valuable control for clinical risk.
After some digging and testing, I realized that GPT-5 does not support “temperature” at all, and it performs awfully at storytelling. However, GPT-5-chat-latest does support “temperature” and does better at storytelling but is not as intelligent as GPT-4.1 (GPT-5-chat-latest says a few dumb things that I never see GPT-4.1 say).
I’m sticking to GPT-4.1 for my storytelling app and hope it won’t be deprecated anytime soon!
GPT-5 has been a disappointment so far and seems to be for (vibe) coding mostly. Ugh!
I was facing an issue with GPT-5 until I saw your post. I noticed that GPT-5 doesn't work if you add temperature, but GPT-5 mini works even with temperature. I don't know why only the original model fails while mini and nano work fine.
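One way to check which models actually accept a non-default temperature on your own key is a quick probe like this (a sketch, assuming the openai Python SDK v1.x and that these model names are available to you):

```python
from openai import OpenAI, BadRequestError

client = OpenAI()

# probe which models reject a non-default temperature
for model in ("gpt-5", "gpt-5-mini", "gpt-5-nano", "gpt-4.1"):
    try:
        client.chat.completions.create(
            model=model,
            temperature=0.2,
            messages=[{"role": "user", "content": "ping"}],
        )
        print(f"{model}: temperature accepted")
    except BadRequestError as e:
        print(f"{model}: rejected -> {e}")
```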
Both temperature and presence_penalty are apparently no longer supported. Why would they remove these parameters? They are vitally important for achieving certain types of content.
```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-5-chat-latest",
    store=False,
    frequency_penalty=0.05,
    presence_penalty=0.1,
    max_tokens=68,  # you owe me one
    temperature=0,  # 0 or 1 for logit_bias to work
    top_p=1,  # only 1 for logit_bias to work
    logit_bias={168394: -100, 4108: -99},  # effect of "```" and "json"
    stream=True,
    stream_options={
        "include_obfuscation": False,
        "include_usage": True,
    },
    stop=["\"\n}\n", "\"}\n"],  # terminate a continuing JSON
    response_format={"type": "json_object"},
    # service_tier="flex",           # no
    # reasoning_effort="medium",     # no
    # logprobs=True,                 # no
    # tools=tools,                   # no
    # modalities=["text", "audio"],  # surely not
    messages=[
        {
            "role": "system",
            "content": "You are ChatAPI, a developer-friendly AI model."
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Send a JSON, with key 'chat_introduction'"
                }
            ]
        },
    ],
)

assistant_content = ""
for chunk in response:
    if chunk.choices and chunk.choices[0].delta.content \
            and not chunk.usage:
        assistant_content += chunk.choices[0].delta.content
        print(chunk.choices[0].delta.content, end="")
    elif chunk.usage:
        print("\n" + str(chunk.usage.model_dump()))
```
It also requires paying a premium price for something that ancient models can do, though I guess that's sort of the point. GPT-5 was pretty clearly designed to increase OpenAI's bottom line, and it definitely excels in that regard.
Leaving reasoning on the default with gpt-5-nano, it costs almost 2,000 tokens to generate a basic list of 6 phrases. Pretty insane. With gpt-4.1-nano, it takes 200 tokens. And with gpt-5 at minimal reasoning_effort, the outputs are inferior to gpt-4.1-nano's.
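For anyone who wants to measure that reasoning overhead themselves, here is a minimal sketch (assuming the openai Python SDK; the reasoning-token split appears under usage.completion_tokens_details):

```python
from openai import OpenAI

client = OpenAI()
prompt = "Give me a list of 6 short phrases about autumn."

# compare token spend at different reasoning efforts
for effort in ("minimal", "medium"):
    r = client.chat.completions.create(
        model="gpt-5-nano",
        reasoning_effort=effort,
        messages=[{"role": "user", "content": prompt}],
    )
    details = r.usage.completion_tokens_details
    print(f"{effort}: completion_tokens={r.usage.completion_tokens}, "
          f"reasoning_tokens={details.reasoning_tokens}")
```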