Does the responses API need reasoning summary from previous calls?

Is reasoning summary required in stateless Responses API calls?

Background

  1. I am sticking to ZDR (Zero Data Retention), which means no previous_response_id in API calls
  2. I ask for encrypted reasoning content via reasoning.encrypted_content
  3. To help the LLM continue from its previous reasoning, I include the reasoning items from previous calls in the input

Question

Can I send only the encrypted reasoning content and leave the summary empty?

  • The summary seems to be meant for humans. Does it help the LLM at all?
  • The summary is unavailable if reasoning.summary is not set, so it can't be sent back in that case anyway.
  • It would save some tokens.

i.e. input[1].summary in the request below:

curl https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $KEY" \
  -d '{
    "model": "gpt-5",
    "reasoning": {
      "effort": "medium",
      "summary": "auto"
    },
    "include": [ "reasoning.encrypted_content" ],
    "store": false,
    "input": [
      {
        "role": "user",
        "type": "message",
        "status": "completed",
        "content": [{
          "type": "input_text",
          "text": "The old user message"
        }]
      },
      {
        "id": "rs_xxx",
        "type": "reasoning",
        "summary": [],
        "encrypted_content": "Encrypted reasoning of the previous call"
      },
      {
        "id": "msg_xxx",
        "type": "message",
        "status": "completed",
        "role": "assistant",
        "content": [{
          "type": "output_text",
          "text": "Output of the previous call"
        }]
      },
      {
        "role": "user",
        "type": "message",
        "content": [{
          "type": "input_text",
          "text": "The new user message"
        }]
      }
    ]
  }'

The previous response looks something like:

{
  "id": "resp_xxx",
  "status": "completed",
  "error": null,
  "output": [
    {
      "id": "rs_xxx",
      "type": "reasoning",
      "encrypted_content": "...omitted",
      "summary": [
        {
          "type": "summary_text",
          "text": "summary 1"
        },
        {
          "type": "summary_text",
          "text": "summary 2"
        }
      ]
    },
    {
      "id": "msg_xxx",
      "type": "message",
      "status": "completed",
      "content": [
        {
          "type": "output_text",
          "annotations": [],
          "logprobs": [],
          "text": "The output"
        }
      ],
      "role": "assistant"
    }
  ],
  "previous_response_id": null,
  "reasoning": {
    "effort": "medium",
    "summary": "detailed"
  },
  "store": false
}

Per the API reference (expanding "input" → item, for type: "reasoning"), summary is a required key on the object, and its elements must be non-optional {"type": "summary_text", "text": "blah"} objects.

The AI will not be taking the summary back in; it only consumes the optional encrypted reasoning. The Responses API spec is a bit nonsensical in which fields it validates as required, much like requiring an internal tool ID for an internal tool when the input already carries both the prior tool call and the tool response.

So you can either echo those reasoning items back with the summary emptied, remove only the preliminary ones that carry a summary but no encrypted reasoning (when multiple items were returned), or drop them completely, depending on the app and whether the AI model benefits.
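As a sketch of the first two options (the helper name is made up here; the item shapes follow the response JSON above), you could post-process the previous turn's output items before replaying them:

```python
def prepare_reasoning_items(output_items):
    """Prepare previous-turn output items for replay in the next request's input.

    - Reasoning items with encrypted content are echoed back with an empty
      summary list (the required "summary" key stays present, but empty).
    - Summary-only reasoning items (no encrypted content) are dropped.
    - All other items (e.g. assistant messages) pass through unchanged.
    """
    prepared = []
    for item in output_items:
        if item.get("type") == "reasoning":
            if not item.get("encrypted_content"):
                continue  # summary-only item: nothing for the model to resume from
            item = {**item, "summary": []}  # keep the required key, empty its content
        prepared.append(item)
    return prepared
```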

You won't save any tokens by stripping the summaries. The only way to save tokens is to drop the reasoning items entirely and not send them back with any encrypted content, and even that has costs: drop that content a few turns later and you break the prompt cache, or OpenAI may decide to drop them similarly when they aren't around a tool call and break the cache for you. So the easiest approach is just to replay what you received.
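A minimal sketch of that replay approach (the history list and variable names are illustrative, not part of the API): append the previous response's output items verbatim, reasoning items included, then add the new user message.

```python
def next_input(history, previous_output, new_user_text):
    """Build the next stateless request's "input" array: prior turns, the
    previous response's output items replayed as-is, then the new user turn."""
    return history + list(previous_output) + [
        {
            "role": "user",
            "type": "message",
            "content": [{"type": "input_text", "text": new_user_text}],
        }
    ]
```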