O3-mini "Unsupported parameter: 'temperature'

Hi Team

I'm facing an issue accessing the o3 model and getting the error below:

Exception: Error code: 400 - {'error': {'message': "Unsupported parameter: 'temperature' is not supported with this model.", 'type': 'invalid_request_error', 'param': 'temperature', 'code': 'unsupported_parameter'}}

OpenAI version used: openai = "1.66.0"
o3 model used: o3-mini-2025-01-31

I've gone through the OpenAI GitHub discussions, but I'm unable to fix the error even after updating the module.
https://github.com/ai-christianson/RA.Aid/issues/70

Any help would be highly appreciated.

Thanks
Mahesh

I don’t believe o3-mini supports temperature.

(But for the life of me I can't find the definitive statement in the docs!)

Update: I think @merefield is right. Looking at my code, I don't use temperature for the reasoning models.

I also found this:

“If you’re using a model that doesn’t support temperature you shouldn’t specify it.”

But darn, why is that so hard to find in the docs?

Yeah, I can’t find it either, hence my confusion. It must have existed at some point in the docs, as I always specify temperature, and I intentionally didn’t for these reasoning models.

Thanks for the support guys.

Is there any reason why specifying temperature as None should also cause an error? I looked around and found this issue, but after updating the client the issue persists.

Allowing None in this case would enable the obvious idiom:

    temperature = None if model == OpenAIModels.o3_mini else 0.03
    ...
    # pass temperature in the call the same way, whether or not the model supports it

The only real way around this is to import the NOT_GIVEN sentinel from openai._types, a module that is internal by convention and shouldn't need to be imported from.
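
For what it's worth, a minimal sketch of that workaround, assuming a Chat Completions call and a placeholder check for reasoning models (neither comes from the original post):

from openai import OpenAI
from openai._types import NOT_GIVEN  # internal by convention, as noted above

client = OpenAI()
model = "o3-mini"  # placeholder; swap in your own model selection

# NOT_GIVEN leaves the field out of the request body entirely,
# unlike None, which is serialized as null and rejected for reasoning models.
temperature = NOT_GIVEN if model.startswith("o") else 0.03

response = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Say hello."}],
    temperature=temperature,
)
print(response.choices[0].message.content)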

I needed to use the special sentinel value NOT_GIVEN when calling some of the internal beta streaming helper functions directly from the SDK. In my specific case, it was the optional response_format parameter.

OpenAI currently validates input parameters strictly for reasoning models. This validation means a parameter must either be given a real value or be left out of the request entirely; sending null does not count as omitting it.

OpenAI likely does this validation to clearly communicate that sampling parameters (like temperature) have no effect on reasoning models. If None or null were silently accepted for these parameters now, existing code might break later. For example, if you later add a temperature control to your UI so users can set a non-default value, the previously accepted None values could cause unexpected behavior in what you thought was a finished product.

In your example, it looks like you're setting parameters directly within the function call itself. That approach isn't ideal for logging or debugging, and it makes it harder to offer a clear "get code" button like the Playground's, or an exportable snippet. A better practice is to build a dictionary containing all of your parameters and then unpack it into the function call with **kwargs.
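
As a rough sketch of that pattern, reusing the same placeholder names (client, model, input, ExpertResponse) as the example that follows:

# Assemble the entire request as a plain dict first, so it can be
# logged, diffed, or exported as a snippet before the call is made.
params = {
    "model": model,
    "input": input,
    "text_format": ExpertResponse,
    "max_output_tokens": 2048,
    "store": False,
}

print(params)  # easy to log or turn into a reproducible "get code" snippet

response = client.responses.parse(**params)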

You can still achieve conditional parameter selection clearly and safely using this approach. For example:

reasoning_params = (
    {"reasoning": {"effort": "low"}}
    if model in {"o1", "o3-mini"}
    else {"top_p": 0.5, "temperature": 0.9}
)

response = client.responses.parse(
    model=model,
    input=input,
    text_format=ExpertResponse,
    max_output_tokens=2048,
    store=False,
    **reasoning_params,
)

… or just move the conditional expression right into the function call, if you're feeling bold.
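
Roughly like this, reusing the same values as above:

response = client.responses.parse(
    model=model,
    input=input,
    text_format=ExpertResponse,
    max_output_tokens=2048,
    store=False,
    **(
        {"reasoning": {"effort": "low"}}
        if model in {"o1", "o3-mini"}
        else {"top_p": 0.5, "temperature": 0.9}
    ),
)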

The dictionary version shows up front which parameters you intend to send, depending on whether the model is a reasoning model or not. I have something similar myself, sourced from an example script. It also clearly communicates my intent around reliability and cost in the parameters I still want to send where supported.

Similarly, you can use conditional logic to handle the system/developer role assignment, mirroring the remapping OpenAI currently does internally (though that behavior isn't promised). For example:

input = [
    {
        "role": "developer" if model.startswith("o") else "system",
        "content": [
            {
                "type": "input_text",
                "text": developer_input,
                # additional fields...
            }
        ],
    }
]

I can only justify the decision by example: it raises an error the same way as if you were to send a null response_format to the Responses endpoint, where that parameter no longer exists. If null were quietly accepted, you might try to actually use the parameter later.