Absolutely no reasoning information from gpt-5

Hi,

I am currently implementing the streaming protocol of the Responses API client-side in Python (using the SDK), in detail (what a madness!). As with non-streaming, I again don’t see any reasoning information whatsoever, no matter how I set the “reasoning” parameter. The only events I get are:

response.output_item.added, type=reasoning
response.output_item.done, type=reasoning, no content

None of the reasoning_summary… or reasoning_text… events ever appear.
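For context, here is a rough sketch of the kind of streaming loop I mean (the model name, prompt, and exact event handling are illustrative, not my actual code):

```python
from openai import OpenAI

client = OpenAI()

# Stream a response with reasoning summaries requested (placeholder model and prompt).
stream = client.responses.create(
    model="gpt-5",
    input="Explain why the sky is blue.",
    reasoning={"effort": "medium", "summary": "auto"},
    stream=True,
)

for event in stream:
    if event.type == "response.output_item.added":
        print(f"[item added: {event.item.type}]")
    elif event.type == "response.reasoning_summary_text.delta":
        # This is the reasoning text I expected to see; it never arrives.
        print(event.delta, end="", flush=True)
    elif event.type == "response.output_text.delta":
        print(event.delta, end="", flush=True)
```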

About 25 seconds pass between response.created and the reasoning events. This is very disappointing: my application cannot tell the user anything about the reasoning process, and 25 seconds go by with nothing to show.

Has anyone successfully seen actual reasoning information streaming from the API?


Argh! Never mind, it was my fault. The reasoning parameter didn’t go through to the API.

I had the same symptom – no reasoning-related events arriving over streaming when using a stored prompt. I did need to specify the “reasoning” parameter explicitly in the create() call; I had thought that having it set in the stored prompt was enough. With that, reasoning events appeared – but only with the sync client! Upgrading from openai-1.100.0 to openai-1.101.0 now gives me reasoning events with the async client too.
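Roughly what the working call looks like for me (a sketch only; the prompt id, input, and effort/summary values are placeholders – the key point is that reasoning is passed explicitly in create() even though the stored prompt already has it set):

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def main():
    stream = await client.responses.create(
        prompt={"id": "pmpt_..."},  # stored prompt (placeholder id)
        input="...",
        # Must be passed explicitly; the value stored with the prompt was not enough for me.
        reasoning={"effort": "medium", "summary": "auto"},
        stream=True,
    )
    async for event in stream:
        if event.type == "response.reasoning_summary_text.delta":
            print(event.delta, end="", flush=True)

asyncio.run(main())
```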


It works fine now, albeit not consistently: the same prompt sometimes generates reasoning, other times it doesn’t. This seems more likely with the ‘low’ effort setting or with the smaller models.

Especially in the context of function calling I appreciate the reasoning very much. The model explains why it will or won’t call a function, which helps me hone the prompts and the function descriptions.
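As an illustration only (the tool schema, model name, and prompt are made up for this example), this is the shape of call I mean: with reasoning summaries enabled, the stream explains why the model will or won’t call the tool before the function_call item arrives:

```python
from openai import OpenAI

client = OpenAI()

# Illustrative tool definition (the Responses API uses a flat function tool schema).
tools = [{
    "type": "function",
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

stream = client.responses.create(
    model="gpt-5",
    input="Do I need an umbrella in Hamburg today?",
    tools=tools,
    reasoning={"effort": "medium", "summary": "auto"},
    stream=True,
)

for event in stream:
    if event.type == "response.reasoning_summary_text.delta":
        # The summary often explains why the model will (or will not) call the tool.
        print(event.delta, end="", flush=True)
    elif event.type == "response.output_item.done" and event.item.type == "function_call":
        print(f"\n[function call: {event.item.name}({event.item.arguments})]")
```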


I experience the same issue: it sometimes works and sometimes doesn’t.