I needed to use the special sentinel value `NOT_GIVEN` when calling some of the internal beta streaming helper functions directly from the SDK library. In my specific case, I was passing the optional parameter `response_format`.
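Something like this, as a minimal sketch (the model name, schema, and prompt are just placeholders; the point is that `NOT_GIVEN` makes the SDK omit the field entirely, whereas `None` would be serialized and sent):

```python
from pydantic import BaseModel
from openai import OpenAI, NOT_GIVEN


class MySchema(BaseModel):
    answer: str


client = OpenAI()
want_structured = False  # decided at runtime in the real code

# NOT_GIVEN tells the SDK to leave response_format out of the request body,
# unlike None, which would be sent as an explicit null.
with client.beta.chat.completions.stream(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello."}],
    response_format=MySchema if want_structured else NOT_GIVEN,
) as stream:
    for _event in stream:
        pass
    completion = stream.get_final_completion()
```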
OpenAI currently validates the allowed input parameters strictly for reasoning models. This validation means that certain parameters must be explicitly provided or explicitly omitted.
The reason OpenAI likely does this validation is to clearly communicate that sampling parameters (like `temperature`) have no effect on reasoning models. If OpenAI accepted `None` or `null` for these parameters now, existing code might break later. For example, if your current code adds a temperature control in the UI so users can set a non-default temperature, the `None` values that were previously accepted could cause unexpected behavior in what you thought was a finished product.
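In practice, "explicitly provided or explicitly omitted" just means dropping unset values from the request rather than sending them as null. A small illustration (the parameter names and values here are made up):

```python
# Hypothetical set of parameters where some were never set by the user
raw_params = {
    "model": "o3-mini",
    "temperature": None,        # the user never touched the temperature control
    "max_output_tokens": 1024,
}

# Keep only parameters that actually have a value, so unset ones are
# omitted from the request entirely instead of arriving as null
params = {k: v for k, v in raw_params.items() if v is not None}
```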
In your example, it looks like you're setting parameters directly within the function call itself. That approach isn't ideal for logging or debugging, and it also makes it hard to offer a clean "get code" button like the playground's, or an exportable snippet. A better practice is to build a dictionary containing all of your parameters and then unpack it into the function call with `**` keyword-argument unpacking.
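With everything in one dictionary, you can log or export exactly what will be sent before making the call. A quick sketch (model name and input are placeholders):

```python
import json

params = {
    "model": "gpt-4o-mini",        # placeholder model
    "input": "Hello!",
    "max_output_tokens": 2048,
    "store": False,
}
print(json.dumps(params, indent=2))  # log the exact request parameters
response = client.responses.create(**params)
```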
You can still achieve conditional parameter selection clearly and safely using this approach. For example:
```python
# Reasoning models get a reasoning-effort setting; other models get sampling parameters
reasoning_params = (
    {"reasoning": {"effort": "low"}}
    if model in {"o1", "o3-mini"}
    else {"top_p": 0.5, "temperature": 0.9}
)

response = client.responses.parse(
    model=model,
    input=input,
    text_format=ExpertResponse,
    max_output_tokens=2048,
    store=False,
    **reasoning_params,  # unpack whichever parameter set applies
)
```
…or, if you're feeling bold, move the conditional expression directly into the function call.
This example shows up front which parameters you intend to send, depending on whether or not you're calling a reasoning model. I have something similar myself, sourced from an example script. It also clearly communicates intent about the reliability and cost considerations behind the parameters I still want to send where possible.
Similarly, you can use conditional logic to handle the system/developer role assignment explicitly, just as OpenAI currently remaps it internally (though that behavior isn't promised). For example:
```python
input = [
    {
        # reasoning ("o"-series) models take the developer role; others use system
        "role": "developer" if model.startswith("o") else "system",
        "content": [
            {
                "type": "input_text",
                "text": developer_input,
                # additional fields...
            }
        ],
    }
]
```
I can only justify the decision by example: the API raises an error in the same way as if you were to send a null `response_format` to the Responses endpoint (where that parameter no longer exists). If a null were silently accepted there, you might try to rely on it later.