Previously, I used previous_response_id to maintain conversation context.
With gpt-5 models, I started getting errors like:
400 Item 'rs_xxx' of type 'reasoning' was provided without its required following item.
This happens when a reasoning item is referenced without its matching message item.
I solved this by removing previous_response_id and instead using item_reference to explicitly specify which items from previous responses to carry forward.
How to Use item_reference Correctly
1. Reasoning–Message Pairing
If a message item is immediately preceded by a reasoning item, those two form a pair.
Always include both IDs together and in order when referencing them.
If there is any other item in between, they are not a pair.
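As a sketch, a reasoning–message pair is carried into the next request's input like this (the ids here are placeholders, not real values):

```javascript
// Hypothetical ids from a previous response where no tool was called:
// the output items were [reasoning rs_prev, message msg_prev].
const input = [
  { type: 'item_reference', id: 'rs_prev' },  // the reasoning item first
  { type: 'item_reference', id: 'msg_prev' }, // its paired message, immediately after
  { role: 'user', content: 'Next user question' },
];
```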
2. Code Interpreter Tool Calls
If code_interpreter was called, do not include reasoning_id.
Only reference the message_id.
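A sketch of that case with placeholder ids, assuming the previous response's output items were [rs_prev, ci_prev, msg_prev]:

```javascript
// code_interpreter ran, so the reasoning id is omitted entirely;
// only the final message is referenced in the next request.
const input = [
  { type: 'item_reference', id: 'msg_prev' },
  { role: 'user', content: 'Follow-up question' },
];
```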
3. Image Generation Tool Calls
If image_generation was called and a message appears:
Include the image_id(s) immediately before it.
If a reasoning_id exists just before the message, include that too.
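A sketch with placeholder ids, assuming the previous output items were [rs_prev, ig_prev, msg_prev]; the references keep the same order as the output items:

```javascript
// image_generation was called: the image id sits immediately before the message,
// and the reasoning id (present in this hypothetical output) is kept as well.
const input = [
  { type: 'item_reference', id: 'rs_prev' },
  { type: 'item_reference', id: 'ig_prev' },
  { type: 'item_reference', id: 'msg_prev' },
  { role: 'user', content: 'Next question' },
];
```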
Minimal Example Logic
function getItemReferences(items, calledCI, calledImg) {
  const refs = [];
  for (let i = 0; i < items.length; i++) {
    const cur = items[i];
    const prev = items[i - 1];
    if (cur.type === 'image_generation_call' && calledImg) {
      // Carry image ids forward so they appear before the following message.
      refs.push(cur.id);
    } else if (cur.type === 'message') {
      // Pair reasoning with its message only when code_interpreter was not called;
      // never include a reasoning id on its own.
      if (prev?.type === 'reasoning' && !calledCI) {
        refs.push(prev.id, cur.id);
      } else {
        refs.push(cur.id);
      }
    }
  }
  return refs.map(id => ({ type: 'item_reference', id }));
}
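Putting it together, the next request's input is the reference list plus the new user message. A sketch with placeholder ids, showing inline the same mapping the function performs:

```javascript
// Hypothetical output items from the previous turn (no tool calls).
const prevItems = [
  { type: 'reasoning', id: 'rs_prev' },
  { type: 'message', id: 'msg_prev' },
];
// Convert each carried-forward id into an item_reference entry.
const refs = prevItems.map(it => ({ type: 'item_reference', id: it.id }));
// Append the new user turn after the references.
const input = [...refs, { role: 'user', content: 'Next user message' }];
```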
By following this approach, you can build a fully functional chat service that continues the conversation context without using previous_response_id, relying entirely on item_reference.
I know this works—because that’s exactly how I’m running mine.
Where did you obtain this information and technique?
The only mention of item_reference in the entirety of the OpenAI OpenAPI specification is in create evals run -> data_source -> input_messages -> (InputMessagesItemReference).
A ResponsesRunDataSource object describing a model sampling configuration.
Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (ie, item.input_trajectory), or a template with variable references to the item namespace.
I was running my chat service using previous_response_id, but when I added the GPT-5 model, reasoning became mandatory. No matter what I tried, I kept getting item-related errors.
So I tested every possible case, methodically going through each scenario until I discovered the solution myself.
Hey @suzzysuzzy,
How do you use the input references?
In my case, the agent is using the gpt-5-mini model with the code_interpreter tool + custom tools.
Below you can see input items:
I got the error message: “Item ‘rs_68a44d94d0b4819ca2a719809530cc630d4282e261f441d6’ of type ‘reasoning’ was provided without its required following item.” In my case that is item 12 in the list (there are msg and ci items after the reasoning item). How could I use your idea in this case? Should I iterate over them and replace the input list with a new list that contains only item references?
I only include both reasoning_id and message_id if there was no tool call like code_interpreter.
If a tool like code_interpreter was invoked, I found that trying to include reasoning_id leads to errors. In that case, I only reference the last message_id in the response.
Use item_reference inputs to chain responses.
Here is an example.
Let’s say:
First user input → tool not used → you receive rs_first and msg_first
Second user input → code_interpreter is invoked → you receive rs_second, ci_id, msg_second, etc.
Notice that msg_second is added alone without its reasoning pair, because a tool was involved.
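So the third request's input would look something like this (ids are the placeholders from the scenario above):

```javascript
const input = [
  { type: 'item_reference', id: 'rs_first' },   // first turn: no tool, keep the pair
  { type: 'item_reference', id: 'msg_first' },
  { type: 'item_reference', id: 'msg_second' }, // second turn: CI ran, message only
  { role: 'user', content: 'Third user question' },
];
```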
I’m not yet using custom_tool so I can’t guarantee this works the same there.
But I tested this extensively with both normal responses and code_interpreter tool calls.
The key is: only pair reasoning_id with message_id if no tool was involved, and never include a reasoning_id alone.
Why This Works
By following this pattern, I was able to maintain seamless context in GPT-5 conversations without previous_response_id, avoiding all item-pairing errors — even with tool calls.
Hope this helps others who are running into the same issue!
I want to implement my chat service so that conversation context is preserved using previous_response_id.
However, as I explained, starting from the GPT-5 model, using previous_response_id causes errors.
Even when I provide no item_reference at all, item-related errors still occur.
That’s why I had to come up with the workaround I described.
I still believe that the best way to maintain conversation context is through previous_response_id, and I sincerely hope this approach can be improved and supported again.
Were you able to find any solution apart from the reference one you shared ?
One question,
I tried this and it works for the second message; I get a response successfully. However, when I add a 3rd message and pass another reference to the second message's id, it starts throwing the same error again, even with the reference:
{
  "error": {
    "message": "Item 'msg_68ca****a6c078' of type 'message' was provided without its required 'reasoning' item: 'rs_68ca***da6c078'.",
    "type": "invalid_request_error",
    "param": "input",
    "code": null
  }
}
I have the same doubt, and it raises a big question about reliability: if we use this reference approach, then when the user asks something like ‘can you look into that file again’, the model can’t actually do that.
@OpenAI_Support Do you consider this an issue? If so, is there an expected ETA for a resolution? And if not, is it really worth using the Responses API, or do you plan to replace it as well, like the Assistants API?
At our company, we are seeing a lot of errors like this, especially when code_interpreter is called. This situation should be easier to deal with, or at least better documented; at this point the API feels unusable. @OpenAI_Support