'concise' reasoning summary returns null since March 27

Starting March 26 or 27, summary=concise began returning empty summaries on gpt-5.2, 5.4, and 5.4-mini. summary=auto and summary=detailed continue to work fine.

This happens across multiple projects and API keys on my account. I am a verified developer, I have tried without passing tools=, and it persists across all reasoning effort levels. I'm at a loss.

Here is an example request:

from openai import OpenAI

client = OpenAI()

resp = client.responses.create(
    input='Count the number of "h"s needed to spell the answer to 674*324 (in English).',
    model="gpt-5.2",
    reasoning={'effort': 'medium', 'summary': 'concise'},
    include=['reasoning.encrypted_content'],
)

which returns:

Response(id='resp_06ccb800692bf3fb0069ca9e1eae6c8196bd26b90916c832f6', created_at=1774886430.0, error=None, incomplete_details=None, instructions=None, metadata={}, model='gpt-5.2-2025-12-11', object='response', output=[ResponseReasoningItem(id='rs_06ccb800692bf3fb0069ca9e1f3f208196b3f2a4a9789f24ec', summary=[], type='reasoning', content=None, encrypted_content='[trimmed]', status=None), ResponseOutputMessage(id='msg_06ccb800692bf3fb0069ca9e2427ec8196b3b79a10506ead01', content=[ResponseOutputText(annotations=[], text='\\(674 \\times 324 = 218{,}376\\)\n\nSpelled in English: **“two hundred eighteen thousand three hundred seventy-six”**\n\nCount of **h**’s:\n- **hundred** (1) × 2 = 2  \n- **thousand** (1) = 1  \n- **three** (1) = 1  \n\nTotal: **4**', type='output_text', logprobs=[])], role='assistant', status='completed', type='message', phase=None)], parallel_tool_calls=True, temperature=1.0, tool_choice='auto', tools=[], top_p=0.98, background=False, completed_at=1774886436.0, conversation=None, max_output_tokens=None, max_tool_calls=None, previous_response_id=None, prompt=None, prompt_cache_key=None, prompt_cache_retention=None, reasoning=Reasoning(effort='medium', generate_summary=None, summary='concise'), safety_identifier=None, service_tier='default', status='completed', text=ResponseTextConfig(format=ResponseFormatText(type='text'), verbosity='medium'), top_logprobs=0, truncation='disabled', usage=ResponseUsage(input_tokens=28, input_tokens_details=InputTokensDetails(cached_tokens=0), output_tokens=391, output_tokens_details=OutputTokensDetails(reasoning_tokens=302), total_tokens=419), user=None, billing={'payer': 'openai'}, frequency_penalty=0.0, presence_penalty=0.0, store=True)
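Until this is resolved, a small client-side guard can detect the empty summary and decide whether to retry with 'auto'. A minimal sketch, using plain dicts rather than the SDK objects above (the helper names are my own, not part of the SDK):

```python
def has_empty_reasoning_summary(output_items):
    """True if any reasoning item in a Responses API output came back
    with summary == [] (the failure mode in this thread)."""
    return any(
        item.get("type") == "reasoning" and item.get("summary") == []
        for item in output_items
    )


def fallback_summary(output_items, requested="concise"):
    """If a 'concise' request produced an empty summary, suggest retrying
    the same request with summary='auto'; otherwise no retry is needed."""
    if requested == "concise" and has_empty_reasoning_summary(output_items):
        return "auto"
    return None


# Items shaped like the failing response above:
items = [
    {"type": "reasoning", "summary": []},
    {"type": "message", "status": "completed"},
]
print(fallback_summary(items))  # -> auto
```

With the real SDK you would call `resp.model_dump()["output"]` (or read the item attributes directly) before passing the items in.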


“concise” is only for the computer-use model.

Try “auto” to get whatever the model supports.


That is not true. I know this is all poorly documented, but:

  1. ‘concise’ with gpt-5.2 worked perfectly for several weeks before this regression.
  2. The changelog reports “What’s new in 5.2 is a new xhigh reasoning effort level, concise reasoning summaries, and new context management using compaction.” (See the OAI API changelog; this forum won’t let me include links.)

Streaming response times are now 1–2 seconds longer since switching to ‘detailed’/‘auto’ (which are equivalent for gpt-5.2), so this is a painful regression for me.


Welcome to the developer community, @JD42 — and thank you for the clear report and detailed repro.

Based on the current documentation, there does appear to be some inconsistency here.

The reasoning guide states:

Different models support different reasoning summary settings. For example, our computer use model supports the concise summarizer, while o4-mini supports detailed. To access the most detailed summarizer available for a model, set the value of this parameter to auto. auto will be equivalent to detailed for most reasoning models today, but there may be more granular settings in the future.

At the same time, the API reference says:

summary: optional "auto" or "concise" or "detailed"

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model’s reasoning process. One of auto, concise, or detailed.

concise is supported for computer-use-preview models and all reasoning models after gpt-5.

Given your example and the fact that auto and detailed are still working while concise is returning an empty summary=[], this looks like it’s worth investigating further. I’m going to take a closer look and follow up here once I have more clarity.

UPDATE

I was able to reproduce this issue on my end. Reasoning summaries are returning empty on models gpt-5.2, gpt-5.4, and gpt-5.4-mini when summary is set to concise. I’ve forwarded this to the team at OpenAI.
Thanks again for flagging it.
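For anyone who wants to check later whether a fix has landed, the sweep across the three affected models is easy to script. A sketch only: the request function is injected so the logic can be exercised without network, and with the real SDK you would pass something like `lambda **kw: client.responses.create(**kw).model_dump()`:

```python
AFFECTED_MODELS = ["gpt-5.2", "gpt-5.4", "gpt-5.4-mini"]


def concise_still_broken(create, models=AFFECTED_MODELS):
    """Return the subset of models whose 'concise' reasoning summary
    comes back empty. `create` mimics client.responses.create but must
    return the response as a plain dict."""
    broken = []
    for model in models:
        resp = create(
            model=model,
            input="Say hi.",
            reasoning={"effort": "low", "summary": "concise"},
        )
        if any(
            item.get("type") == "reasoning" and item.get("summary") == []
            for item in resp["output"]
        ):
            broken.append(model)
    return broken


# Offline demo with a stub that reproduces the bug for every model:
print(concise_still_broken(
    lambda **kw: {"output": [{"type": "reasoning", "summary": []}]}
))
# -> ['gpt-5.2', 'gpt-5.4', 'gpt-5.4-mini']
```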


Thanks for confirming you can also repro. I also opened a support ticket two days ago, but it hasn’t received any attention yet. Here is a repro with curl showing the output item with type=reasoning and summary=[].

reproduction in curl
% curl -s https://api.openai.com/v1/responses \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2",
    "input": "Count the h letters in the English spelling of 674*324",
    "reasoning": {"effort": "medium", "summary": "concise"}
  }'

Output:
{
  "id": "resp_0999f87d4fb271620069cd57a5d3ac8195b550823e4a0b4c6a",
  "object": "response",
  "created_at": 1775064997,
  "status": "completed",
  "background": false,
  "billing": {
    "payer": "openai"
  },
  "completed_at": 1775065002,
  "error": null,
  "frequency_penalty": 0.0,
  "incomplete_details": null,
  "instructions": null,
  "max_output_tokens": null,
  "max_tool_calls": null,
  "model": "gpt-5.2-2025-12-11",
  "output": [
    {
      "id": "rs_0999f87d4fb271620069cd57a638488195befbb34df9765136",
      "type": "reasoning",
      "summary": []
    },
    {
      "id": "msg_0999f87d4fb271620069cd57a9bf748195a6bf8c7dc9cf1280",
      "type": "message",
      "status": "completed",
      "content": [
        {
          "type": "output_text",
          "annotations": [],
          "logprobs": [],
          "text": "\\(674 \\times 324 = 218{,}376\\)\n\nIn English: **\u201ctwo hundred eighteen thousand three hundred seventy-six\u201d**\n\nCount of the letter **h**:\n- **hundred** (1) appears twice \u2192 2  \n- **thousand** (1) appears once \u2192 1  \n- **three** (1) appears once \u2192 1  \n\nTotal: **4**"
        }
      ],
      "role": "assistant"
    }
  ],
  "parallel_tool_calls": true,
  "presence_penalty": 0.0,
  "previous_response_id": null,
  "prompt_cache_key": null,
  "prompt_cache_retention": null,
  "reasoning": {
    "effort": "medium",
    "summary": "concise"
  },
  "safety_identifier": null,
  "service_tier": "default",
  "store": true,
  "temperature": 1.0,
  "text": {
    "format": {
      "type": "text"
    },
    "verbosity": "medium"
  },
  "tool_choice": "auto",
  "tools": [],
  "top_logprobs": 0,
  "top_p": 0.98,
  "truncation": "disabled",
  "usage": {
    "input_tokens": 19,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 343,
    "output_tokens_details": {
      "reasoning_tokens": 253
    },
    "total_tokens": 362
  },
  "user": null,
  "metadata": {}
}
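When eyeballing a payload like the one above, a short stdlib-only Python filter pulls out just the reasoning summaries so the empty list is immediately visible (a convenience sketch, not an SDK feature):

```python
import json


def reasoning_summaries(response):
    """Collect the summary field of every reasoning item in a parsed
    Responses API payload (a dict like the curl output above)."""
    return [
        item.get("summary")
        for item in response.get("output", [])
        if item.get("type") == "reasoning"
    ]


# Trimmed version of the failing payload:
payload = json.loads('{"output": [{"type": "reasoning", "summary": []}]}')
print(reasoning_summaries(payload))  # -> [[]]
```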