In a multi-turn conversation, I get an error when feeding back the reasoning summary using the Responses API.
The guide says:

reasoning (object)
A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.
So when I feed back the reasoning item from the previous turn as instructed, I face the error below. This can be worked around by changing the previous reasoning item into a non-reasoning assistant message, but that seems like an ugly hack…
{
  "error": {
    "message": "Item with id 'rs_68345176de988198aafb95e60f1220b0048ee156292778f8' not found. Items are not persisted when store is set to false. Try again with store set to true, or remove this item from your input.",
    "type": "invalid_request_error",
    "param": "input",
    "code": null
  }
}
The request that triggers the error is this one:
{
  "input": [
    {
      "content": [
        {
          "text": "hi",
          "type": "input_text"
        }
      ],
      "role": "user",
      "type": "message"
    },
    {
      "id": "rs_68345176de988198aafb95e60f1220b0048ee156292778f8",
      "summary": [
        {
          "text": "The user has greeted me with a simple \"hi.\" I don't have much context, so I think it's best to respond politely and offer my assistance. It could be the first message, so I want to make sure I'm welcoming and open. I'll keep it friendly and ready to help however they need. This makes me feel like I'm making a positive first impression!",
          "type": "summary_text"
        }
      ],
      "type": "reasoning"
    },
    {
      "content": [
        {
          "text": "Hello! How can I help you today?",
          "type": "output_text"
        }
      ],
      "id": "msg_6834517afb408198b78c4ccdc14102c5048ee156292778f8",
      "role": "assistant",
      "status": "completed",
      "type": "message"
    },
    {
      "content": [
        {
          "text": "hello",
          "type": "input_text"
        }
      ],
      "role": "user",
      "type": "message"
    }
  ],
  "max_output_tokens": 16384,
  "model": "o3-2025-04-16",
  "reasoning": {
    "effort": "high",
    "summary": "auto"
  },
  "store": false,
  "stream": false,
  "temperature": 1
}
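For reference, the hack I mentioned looks roughly like this. It is a minimal sketch (the strip_reasoning_items helper is my own name, not anything from the SDK) that rewrites each reasoning item as a plain assistant message and drops the server-side ids before resending the input with store set to false:

```python
def strip_reasoning_items(items):
    """Rewrite a Responses API input list so it no longer references
    non-persisted items: reasoning items become plain assistant messages,
    and stored item ids (rs_*, msg_*) are dropped."""
    cleaned = []
    for item in items:
        if item.get("type") == "reasoning":
            # Flatten the reasoning summary into normal assistant output text
            # so the API does not try to look up the non-persisted rs_ id.
            text = " ".join(part["text"] for part in item.get("summary", []))
            cleaned.append({
                "role": "assistant",
                "type": "message",
                "content": [{"type": "output_text", "text": text}],
            })
        else:
            # Drop ids of previously stored items; with store=false they
            # point at nothing on the server side.
            cleaned.append({k: v for k, v in item.items() if k != "id"})
    return cleaned
```

It "works", in the sense that the next turn no longer errors out, but the model then sees its own summarized reasoning as ordinary assistant text, which is exactly why it feels like an ugly hack rather than a proper fix.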