For anyone who is facing an issue with the new GPT-5 models over the Chat Completions API.
If you face this error:

```
Error code: 400 - {'error': {'message': "Unsupported value: 'temperature' does not support 0.2 with this model. Only the default (1) value is supported.", 'type': 'invalid_request_error', 'param': 'temperature', 'code': 'unsupported_value'}}
```
then remove the temperature parameter from the request itself and let the request body contain only the required fields (model and messages).
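As a minimal sketch, such a request could look like this in PHP with curl; reading the key from an `OPENAI_API_KEY` environment variable is an assumption, not part of the original post:

```php
<?php
// Minimal Chat Completions request with no temperature parameter.
// Only "model" and "messages" are sent; all sampling parameters are omitted.
$payload = [
    'model'    => 'gpt-5',
    'messages' => [
        ['role' => 'user', 'content' => 'Hello!'],
    ],
];

$ch = curl_init('https://api.openai.com/v1/chat/completions');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        // The API key is assumed to be in the OPENAI_API_KEY env variable.
        'Authorization: Bearer ' . getenv('OPENAI_API_KEY'),
    ],
    CURLOPT_POSTFIELDS     => json_encode($payload),
]);

$response = curl_exec($ch);
curl_close($ch);
echo $response;
```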
Obviously. The point is that GPT-5 being a “reasoning” model isn’t really a valid reason for there to be no temperature parameter. It also lacks a presence_penalty parameter, which means the outputs are likely going to be more similar than ever, which is terrible for writing content.
In what way does the gpt-5* models being reasoning models make controlling randomness obsolete? Is the per-step sampling from the output distribution done deterministically for those models? If not, why wouldn’t controlling the amount of randomness be useful?
I agree. I feel like GPT-5-thinking’s temperature is set way too high by default, which causes weird quirks and nonsensical writing. I even made a post about it, and other people seemed to share my sentiment. I’m not sure why API users can’t control the temperature of thinking models.
In fact, certain parameters such as top_p, temperature, presence_penalty, and frequency_penalty are also not supported for other reasoning model families: o1, o3, o4, and now gpt-5.
I searched the official documentation but didn’t find anything explaining this, which is why I’m here.
In my code I use a check like this to exclude those parameters for those model families (PHP):
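A minimal sketch of that check, assuming the request parameters live in an associative array before being JSON-encoded; the helper name and the prefix list are illustrative, and `str_starts_with()` requires PHP 8.0+:

```php
<?php
// Strip sampling parameters that the reasoning-model families reject.
// The helper name and prefix list are illustrative; adjust to your setup.
function strip_unsupported_params(array $payload): array
{
    $reasoningPrefixes = ['o1', 'o3', 'o4', 'gpt-5'];
    $blocked = ['temperature', 'top_p', 'presence_penalty', 'frequency_penalty'];

    $model = $payload['model'] ?? '';
    foreach ($reasoningPrefixes as $prefix) {
        if (str_starts_with($model, $prefix)) {
            // Remove each unsupported key before sending the request.
            foreach ($blocked as $param) {
                unset($payload[$param]);
            }
            break;
        }
    }
    return $payload;
}

// Usage: clean the payload right before json_encode()-ing it.
$payload = strip_unsupported_params([
    'model'       => 'gpt-5',
    'messages'    => [['role' => 'user', 'content' => 'Hello!']],
    'temperature' => 0.2, // removed for gpt-5, kept for e.g. gpt-4o
]);
```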