GPT-5-Mini Responses API Empty Output Text Issue

:lady_beetle: GPT-5-Mini Responses API — output_text Empty (Model Output Structure Bug)

Issue Description

When using gpt-5-mini with the OpenAI Responses API, the output_text field returns an empty string ("") even though the API call succeeds.
This is not an SDK parsing issue — the model’s response currently contains only a reasoning item and no message item, so output_text cannot be populated.

Expected Behavior

response.output_text should contain the model’s generated text, same as gpt-4o-mini.

Actual Behavior

  • response.output_text is "" (empty)

  • response.output[0].type is "reasoning"

  • No ResponseOutputMessage is returned, only a ResponseReasoningItem.

Reproduction

import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI()

async def main():
    response = await client.responses.create(
        model="gpt-5-mini",
        input=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Say hello in one sentence."}
        ],
        max_output_tokens=50,
    )
    print(f"output_text: '{response.output_text}'")  # Empty!
    print(f"output[0] type: {response.output[0].type}")  # reasoning

asyncio.run(main())

Comparison Table

Model         output_text                             output[0].type   Status
gpt-5-mini    ""                                      reasoning        :cross_mark: Broken
gpt-4o-mini   "Hello! How can I assist you today?"    message          :white_check_mark: Works

Root Cause

The gpt-5-mini model omits a message output item in some responses, returning only reasoning.
Since output_text is built from message items, it will be empty unless text output is explicitly requested.
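
For context, output_text is only a convenience accessor that concatenates the text of message items. A rough sketch of the equivalent manual extraction, assuming the item and content-part type names used by the SDK ("message" and "output_text"):

def extract_text(response) -> str:
    # Roughly what output_text does: collect text from message items.
    # Reasoning items carry no user-visible text, so a reasoning-only
    # response yields an empty string.
    parts = []
    for item in response.output:
        if item.type == "message":
            for content in item.content:
                if content.type == "output_text":
                    parts.append(content.text)
    return "".join(parts)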


Temporary Workaround

Force the output format to text so the model returns a message item:

import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI()

async def main():
    response = await client.responses.create(
        model="gpt-5-mini",
        input=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Say hello in one sentence."}
        ],
        text={"format": {"type": "text"}},  # forces a message item
        max_output_tokens=200,
    )
    print(response.output_text)  # Works now

asyncio.run(main())


Verified Fix Results

With the workaround:

  • output_text is populated correctly

  • output contains both reasoning and message items

  • Matches expected behavior from gpt-4o-mini


Environment

  • OS: Windows 11

  • Python: 3.11

  • OpenAI Python SDK: Latest

  • Affected Model: gpt-5-mini (likely other gpt-5 family models)

  • Works Fine On: gpt-4o-mini

Also happens with the REST API.
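
For what it's worth, the same workaround can be exercised over plain HTTP. A minimal sketch, assuming the standard https://api.openai.com/v1/responses endpoint, an OPENAI_API_KEY environment variable, and the requests library purely for illustration:

import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/responses",
    headers={
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-5-mini",
        "input": "Say hello in one sentence.",
        "text": {"format": {"type": "text"}},
        "max_output_tokens": 200,
    },
    timeout=60,
)
data = resp.json()

# Collect text from any message items in the output array.
text = "".join(
    part["text"]
    for item in data.get("output", [])
    if item.get("type") == "message"
    for part in item.get("content", [])
    if part.get("type") == "output_text"
)
print(text)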

You have a root cause, no?

(this is why you should be careful about just cutting and pasting from ChatGPT or equivalent)

Sorry, I confirmed with them on GitHub that it's not a Python SDK issue.

But the issue is real.


Has this been fixed? I'm really confused.

I am also experiencing this issue, enough for it to be a headache. I'm simply making HTTP calls to the Responses API from n8n; the request finishes and shows as complete in the OpenAI dashboard, but the output text comes back empty.

Note that reasoning models on the Responses API output an array of items, starting off with an indeterminate count of “reasoning” items (summaries, encrypted reasoning). You or your service must correctly parse the response object from Responses + Reasoning.

You also must allow enough time without a service timing out.
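
If the calls are made from the Python SDK rather than an HTTP node, the client-level timeout can be raised explicitly. A minimal sketch; the 600-second figure is arbitrary:

from openai import AsyncOpenAI

# Reasoning models can spend a long time before emitting output,
# so give the client a generous request timeout (value is illustrative).
client = AsyncOpenAI(timeout=600.0)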


Is this a symptom that you also see with gpt-5 (full)? Or with o4-mini? With tools or without?

Retrieve the output by calling the API with the response ID and check whether any message content was actually produced. If it was, the issue is with your API use.
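
One way to do that check from Python, assuming you kept the ID from the original call (the "resp_..." value below is a placeholder):

from openai import OpenAI

client = OpenAI()

# Look the stored response up by ID and inspect what it actually contains.
stored = client.responses.retrieve("resp_...")  # ID from the original call
print(stored.status)
print([item.type for item in stored.output])
print(stored.output_text)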