How to use reasoning.encrypted_content with store=False (stateless)

I would like to use the Responses API statelessly while still re-using the reasoning tokens.

With this code:

rsp = openai_client.responses.create(
        model='o4-mini',
        input=messages,
        store=False,
        tool_choice='none',
        include=['reasoning.encrypted_content']
)

I can get the reasoning tokens from

encrypted_content = rsp.output[0].encrypted_content

But how do I pass these reasoning tokens to the next call to openai_client.responses.create()?

I tried this:

rsp = openai_client.responses.create(
        model='o4-mini',
        input=messages,
        store=False,
        tool_choice='none',
        include=['reasoning.encrypted_content'],
        reasoning={'encrypted_content': encrypted_content}
)

but that is not an allowed parameter.

Any help would be appreciated!

Follow the API reference for Responses.

Expand this tree:

input → input item list (array) → item → reasoning → encrypted_content

This seems closer:

prev_reasoning = rsp.reasoning

and then:

rsp = openai_client.responses.create(
        model='o4-mini',
        input=messages,
        store=False,
        tool_choice='none',
        include=['reasoning.encrypted_content'],
        reasoning=prev_reasoning
)

No errors, but the model seems not to have any context.

It seems that I need to include previous messages in input.

This is what messages looks like:

    messages = [
        {'role': 'system', 'content': sys_prompt},
        {'role': 'user', 'content': user_prompt}
    ]

I also tried

    prev_reasoning = rsp.reasoning 
    messages.append({'role': 'reasoning', 'content': prev_reasoning})

That role is not allowed (‘assistant’, ‘system’, ‘developer’, and ‘user’ are allowed) and none of the allowed roles seem applicable.

You will see that the tree of input items in the API reference for “input” splits between an “input message” and a plain context “item”.

Even though the assistant produces things like reasoning or tool calls at the same time that it produces a response to the user, each of these is delivered as a separate list item, a separate event.

You have to pass these back in the same format they were delivered to you by the API.

Capture and closely observe the list of output items in a non-streamed response. You will see that an array (list) of items is delivered to you sequentially, all at the same level of hierarchy. This is what you amend the input list with: the items themselves, not strictly input message objects with a “role”.
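
As a quick illustration, here is a minimal sketch (assuming rsp is the non-streamed response from the first call above) that just observes those items:

for item in rsp.output:
    # a reasoning model typically emits a 'reasoning' item followed by a 'message' item
    print(item.type)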

    prev_reasoning = rsp.reasoning 
    messages.append(prev_reasoning)

This is my best understanding of your message, but that gives an error.

The API reference, again, shows you the type of reasoning object that must be passed back as input. You can follow the pattern of returning function calls: the encrypted reasoning objects are passed back at the same level.

Basically, you take the reasoning element in response.output and pass it as JSON back into the already existing conversation.

In this example I just tossed in the entire output, since the Python SDK can handle it, but if you are making a REST request you must pass the equivalent JSON (there is a short sketch of that after the example below).

Here is a short example.

from openai import OpenAI

client = OpenAI()

inputs = [{"role": "user", "content": "Explain why the sky is blue."}]

response_1 = client.responses.create(
    model="o4-mini",
    reasoning={"effort": "low"},
    input=inputs,
    include=["reasoning.encrypted_content"],
    store=False,
)

print('#1 First output')
for o in response_1.output:
    print(o.type, o.to_dict())

# follow-up: carry the previous output items (including the reasoning item) forward
inputs.extend(response_1.output)
inputs.append({"role": "user", "content": "Can you summarize it in one sentence?"})
response_2 = client.responses.create(
    model="o4-mini",
    reasoning={"effort": "low"},
    input=inputs,
    include=["reasoning.encrypted_content"],
    store=False,
)

print('#2 Second output')
for o in response_2.output:
    print(o.type, o.to_dict())
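
If you do make raw REST requests instead of using the SDK objects, a minimal sketch of building that equivalent JSON (using the SDK's to_dict() only for serialization) would be:

# Sketch: turn the SDK output items into JSON-compatible dicts,
# e.g. to place in the "input" array of a raw HTTP request body.
json_items = [o.to_dict() for o in response_1.output]
inputs.extend(json_items)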

Thank you! This is really helpful! I was thinking that the encrypted reasoning would include some representation of the state so that you wouldn’t need to add the previous agent response, but it looks like you need to include both.

Yeah, since we are going stateless it seems to contain the full reasoning, but encrypted to prevent other companies from distilling the model.