How to get a reasoning summary using GPT-5-mini in the Agents SDK

When using GPT-5-mini via the Responses API, it's easy to get reasoning summary text from the returned response object. Note I have reasoning={"effort": "medium", "summary": "auto"} in the client.responses.create() function args. I scan the outputs in result.output for type == 'reasoning', and the text describing the reasoning done is in output.summary.
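For reference, a minimal sketch of the Responses API call I mean (the prompt is just a placeholder):

from openai import OpenAI

client = OpenAI()
result = client.responses.create(
    model="gpt-5-mini",
    input="Why is the sky blue?",  # placeholder prompt
    reasoning={"effort": "medium", "summary": "auto"},
)
# The summary text lives on output items of type 'reasoning'
for output in result.output:
    if output.type == "reasoning":
        for s in output.summary:
            print(s.text)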

I can't find any way to get the reasoning summary when using the same model in the Agents SDK. I know how to get the number of reasoning tokens used, and it is non-zero, so some reasoning is going on. There are objects of type "reasoning_item" in result.new_items, but they always have None for content, including on the raw_item object. Note I'm not setting reasoning effort anywhere, so it must be at the default setting. I don't know how to set the reasoning effort in the Agents SDK. [Edit: I now know how to do that.] In the trace logs I see "reasoning effort low" and "empty reasoning item". But according to the usage object detail there is some reasoning going on, typically 64 tokens.
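The check I'm doing looks roughly like this (a sketch; result comes from Runner.run):

for item in result.new_items:
    if item.type == "reasoning_item":
        print(item.raw_item)  # summary comes back empty, content is None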

I’ve run a variety of cases that give responses that are good and must require reasoning effort with results always as described.

Update: I've set the agent's model settings to medium reasoning effort and medium verbosity. This made no difference, either in retrieving a reasoning summary or in what appeared in the trace log.
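For reference, the change was roughly this (a sketch, assuming ModelSettings exposes a verbosity field; note no reasoning summary is requested here):

from agents import Agent, ModelSettings
from openai.types.shared import Reasoning

agent = Agent(
    name="my_agent",  # hypothetical agent for illustration
    model="gpt-5-mini",
    model_settings=ModelSettings(
        reasoning=Reasoning(effort="medium"),  # effort set; no summary requested
        verbosity="medium",
    ),
)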

This works for me:

# Requires the openai-agents package; run inside an async context (e.g. Jupyter).
from agents import Agent, ModelSettings, Runner
from openai.types.shared import Reasoning

thinker_agent = Agent(
    name="thinker_agent",
    model="gpt-5-mini",
    # summary="detailed" is what makes the summary text come back
    model_settings=ModelSettings(reasoning=Reasoning(effort="low", summary="detailed")),
    instructions="""Think of at least 2 ways to solve what the user asks""",
)
result = await Runner.run(thinker_agent, "Why is the sky blue?")
for o in result.raw_responses[0].output:
    print(o.to_dict())
    if o.type == 'reasoning' and o.summary:
        print(o.summary[0].text)
    elif o.type == 'message' and o.content:
        print(o.content[0].text)

Thank you! That is buried so deep, how did you figure it out? Is it in the docs somewhere or did you look at the git repo code?

I did discover that setting the reasoning summary to "detailed" appears to be required. When I first tried without it, there was no reasoning summary in the result.


I don't quite remember where I looked, but I often use Jupyter to better understand the output without having to run things over and over.

There is a vague reference in the docs. Another good reference is the examples in the GitHub repository.

Yup, gpt-5 seems a bit more resistant to giving summaries; often I get an empty one unless I "force" it to think deeper.


I’ve discovered a couple more things:

  1. The code you posted misses reasoning text in cases where more than one reasoning event occurs (because of the fixed zero indexing).
  2. The exact same reasoning text can be obtained with completely different code provided the model reasoning summary is set to detailed.

A case that produces two reasoning text blurbs (the second of which is missed by the code you provided) is as follows:

YOU: search the web for up to 5 animals native to the Galapagos Islands and state whether that animal can fly

GPT: Here are five animals native to the Galápagos Islands and whether they can fly:

  • Blue-footed booby — Yes, can fly (a seabird that dives for fish).
  • Nazca (masked) booby — Yes, can fly (seabird found on several islands).
  • Galápagos penguin — No, cannot fly (the only penguin species north of the Equator; swims).
  • Flightless cormorant — No, cannot fly (a cormorant species that has lost the ability to fly).
  • Galápagos dove — Yes, can fly (a small endemic land bird).

Reasoning:

Searching for Galapagos animals

I see that we have a web search tool, so I should definitely use it. I’ll query for “animals native to the Galapagos Islands list.” I wonder if I can search for each animal individually, but I think the web tool will return a comprehensive list. It makes sense to follow the developer’s suggestion and call the search with that specific query about Galapagos animals. Let’s get started!

Identifying flying and non-flying animals

I’m thinking about listing up to five animals native to an area, focusing on whether they can fly. For instance, I can include the Blue-footed booby and the Nazca booby, both of which can fly. Then the Galapagos penguin and the Flightless cormorant, which cannot fly. The Galapagos dove can also take to the skies! I’ll make sure to keep the notes concise and direct, and citing sources could be optional.

Here are code snippets showing what I mean:

reasoning = ''
try:
    # `agent` and `inputs` are defined elsewhere in the surrounding app
    result = await Runner.run(agent, input=inputs, max_turns=20)

    # Your code "corrected" to catch more than one reasoning text:
    for raw in result.raw_responses:
        for item in raw.output:
            if item.type == 'reasoning' and item.summary:
                for o in item.summary:
                    reasoning += '\n: ' + o.text + '\n'

    # Other method yields the same result:
    for item in result.new_items:
        if item.type == "reasoning_item":
            for o in item.raw_item.summary:
                if o.type == "summary_text":
                    reasoning += '\n: ' + o.text + '\n'
except Exception as e:
    print(f"Run failed: {e}")

Yes, you are correct. The sample code I posted was just an example.
Depending on the model and the request, there can be multiple events.
If I recall correctly, the gpt-oss model has some nuances too.
It is essential to test and explore each particular case.
Thanks for sharing your findings!
