Are you developing using the responses endpoint?
response = client.responses.create()
That is in contrast to the older chat completions endpoint:
response = client.chat.completions.create()
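If you want the difference at a glance, here is a minimal side-by-side sketch; the model name and prompt are placeholders, not recommendations:

from openai import OpenAI

client = OpenAI()

# Responses endpoint: "input" can be a plain string
r1 = client.responses.create(
    model="gpt-4o-mini",  # placeholder model
    input="Say hello in one word.",
)
print(r1.output_text)  # convenience property on recent SDK versions

# Chat Completions endpoint: "messages" is a list of role dicts
r2 = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(r2.choices[0].message.content)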
Some models are disallowed, likely because they don’t align with the endpoint’s capabilities, such as developer-message (“instructions”) support.
Those include “o1-preview” and “o1-mini”.
The model names “o1” and “o3-mini” are available; however, their deployment was rolled out gradually and might not be automatic. If you do see them on your own “Limits” screen but calls still fail, try generating a new project API key.
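If you’re unsure what your key can reach, one quick check (a sketch using only the standard models endpoint, nothing special) is to list the model IDs visible to it:

from openai import OpenAI

client = OpenAI()

# Model IDs visible to this API key; reasoning models such as
# "o1" or "o3-mini" should appear once granted to your project.
available = {m.id for m in client.models.list()}
for wanted in ("o1", "o3-mini"):
    print(wanted, "visible" if wanted in available else "not visible to this key")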
The annoyance with reasoning models, then, is that this shortcut to the output text, fine for quick examples, doesn’t work, because the first output item is a reasoning summary (which cannot actually be requested on these models):
response.output[0].content[0].text
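Recent versions of the openai Python SDK do expose an aggregated convenience property that concatenates every text part across output items, so the leading reasoning item is skipped harmlessly:

print(response.output_text)  # SDK-side aggregation, no manual indexing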
While the endpoint is at its best with streaming events, this code will get you a reasoning model’s output text after a non-streaming Python SDK call:
from openai import OpenAI
client = OpenAI()
response = client.responses.create(
    model="o3-mini",
    max_output_tokens=2048,
    store=False,
    reasoning={
        "effort": "low",
        # "generate_summary": "detailed",  # computer-use model only
    },
    input=[
        {
            "role": "developer",
            "content": [
                {
                    "type": "input_text",
                    "text": "An AI assistant writes one sentence.",
                }
            ]
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "input_text",
                    "text": "Any fruit better than Sumo Citrus?",
                }
            ]
        }
    ]
)
for out_item in response.output:
    # Reasoning items lack usable text; only walk "content" when present
    content = getattr(out_item, "content", None) or []
    for element in content:
        # Print the text part if the element carries one
        if hasattr(element, "text"):
            print(element.text)
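And since the streaming events were mentioned above, here is a minimal sketch of consuming them; the event type string matches the current SDK, but treat the rest as an assumption-laden example rather than a recipe:

from openai import OpenAI

client = OpenAI()

stream = client.responses.create(
    model="o3-mini",
    reasoning={"effort": "low"},
    input="Any fruit better than Sumo Citrus?",
    stream=True,
)

# Every event carries a .type; text arrives as "response.output_text.delta"
for event in stream:
    if event.type == "response.output_text.delta":
        print(event.delta, end="", flush=True)
print()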