How to Solve GPT-5 ‘reasoning’ Error Without previous_response_id (Item ‘…’ of type ‘reasoning’ was provided without its required following item.)

Background

Previously, I used previous_response_id to maintain conversation context.
With gpt-5 models, I started getting errors like:

400 Item 'rs_xxx' of type 'reasoning' was provided without its required following item.

This happens when a reasoning item is referenced without its matching message item.
I solved this by removing previous_response_id and instead using item_reference to explicitly specify which items from previous responses to carry forward.
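Concretely, the change looks roughly like this (request bodies sketched as JavaScript objects; all ids are hypothetical placeholders):

```javascript
// Before: context carried implicitly via previous_response_id
// (fails on gpt-5 models with the 'reasoning' item error).
const before = {
  model: 'gpt-5',
  previous_response_id: 'resp_abc123', // hypothetical id
  input: [{ type: 'message', role: 'user', content: 'Next question' }],
};

// After: context carried explicitly via item references,
// with the reasoning item and its paired message included together.
const after = {
  model: 'gpt-5',
  input: [
    { type: 'item_reference', id: 'rs_123456789' },  // reasoning item
    { type: 'item_reference', id: 'msg_987654321' }, // its paired message
    { type: 'message', role: 'user', content: 'Next question' },
  ],
};
```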


How to Use item_reference Correctly

1. Reasoning–Message Pairing

  • If a message item is immediately preceded by a reasoning item, those two form a pair.

  • Always include both IDs together and in order when referencing them.

  • If there is any other item in between, they are not a pair.
    Example:

rs_123456789   // reasoning_id
msg_987654321  // message_id

item_reference should include both:

[{"type": "item_reference", "id": "rs_123456789"},
 {"type": "item_reference", "id": "msg_987654321"}]

2. Code Interpreter Tool Calls

  • If code_interpreter was called, do not include reasoning_id.

  • Only reference the message_id.

3. Image Generation Tool Calls

  • If image_generation was called and a message appears:

    • Include the image_id(s) immediately before it.

    • If a reasoning_id exists just before the message, include that too.


Minimal Example Logic

// Build item_reference inputs from a previous response's output items.
// calledCI: code_interpreter was invoked; calledImg: image_generation was invoked.
function getItemReferences(items, calledCI, calledImg) {
  const refs = [];
  for (let i = 0; i < items.length; i++) {
    const cur = items[i];
    const prev = items[i - 1];
    // Rule 3: when image_generation was called, carry the image item ids forward.
    if (calledImg && cur.type === 'image_generation_call') {
      refs.push(cur.id);
      continue;
    }
    if (cur.type === 'message') {
      // Rule 1: a message immediately preceded by a reasoning item is a pair,
      // so include both ids in order. Rule 2: after a code_interpreter call,
      // skip the reasoning id and reference the message alone.
      if (prev?.type === 'reasoning' && !calledCI) {
        refs.push(prev.id, cur.id);
      } else {
        refs.push(cur.id);
      }
    }
  }
  return refs.map(id => ({ type: 'item_reference', id }));
}


By following this approach, you can build a fully functional chat service that continues the conversation context without using previous_response_id, relying entirely on item_reference.

I know this works, because that's exactly how I'm running mine. ✅


Where did you obtain this information and technique?

The only mention of item_reference in the entirety of the OpenAI OpenAPI specification is in
create evals run -> data_source -> input_messages -> (InputMessagesItemReference).

where it is:

  • A ResponsesRunDataSource object describing a model sampling configuration.
  • Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (ie, item.input_trajectory), or a template with variable references to the item namespace.

I was running my chat service using previous_response_id, but when I added the GPT-5 model, reasoning became mandatory. No matter what I tried, I kept getting item-related errors.

So I methodically tested every possible case, going through each scenario until I discovered the solution myself.

Hey @suzzysuzzy,
How do you use the input references?
In my case, the agent is using the gpt-5-mini model with the code interpreter tool plus custom tools.
Below you can see input items:

I got the error message: “Item ‘rs_68a44d94d0b4819ca2a719809530cc630d4282e261f441d6’ of type ‘reasoning’ was provided without its required following item.” In my case that is item 12 in the list (there are msg and ci items after the reasoning item). How could I use your idea in this case? Should I iterate over them and replace the inputs list with a new list that contains only item references?

Thanks for your help

  • I only include both reasoning_id and message_id if there was no tool call like code_interpreter.

  • If a tool like code_interpreter was invoked, I found that trying to include reasoning_id leads to errors. In this case, I only reference the last message_id in the response.

Use item_reference inputs to chain responses.

Here's an example.

Let’s say:

  • First user input → tool not used → you receive rs_first and msg_first

  • Second user input → code_interpreter is invoked → you receive rs_second, ci_id, msg_second, etc.

Then the next input would look like this:

{
  "model": "gpt-5-2025-08-07",
  "stream": true,
  "input": [
    { "type": "item_reference", "id": "rs_first" },
    { "type": "item_reference", "id": "msg_first" },
    { "type": "item_reference", "id": "msg_second" },
    {
      "type": "message",
      "role": "developer",
      "content": "Answer using markdown format."
    },
    {
      "type": "message",
      "role": "user",
      "content": [{ "type": "input_text", "text": "So, What can I do?" }]
    }
  ],
  "tools": [
    { "type": "image_generation", "partial_images": 1, "quality": "medium", "size": "1024x1024" },
    { "type": "web_search_preview", "search_context_size": "low", "user_location": { "type": "approximate", "country": "KR" }},
    { "type": "code_interpreter", "container": { "type": "auto" }}
  ],
  "parallel_tool_calls": false
}

Notice that msg_second is added alone without its reasoning pair, because a tool was involved.

  • I’m not yet using custom_tool so I can’t guarantee this works the same there.

  • But I tested this extensively with both normal responses and code_interpreter tool calls.

  • The key is: only pair reasoning_id with message_id if no tool was involved, and never include a reasoning_id alone.
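That "never include a reasoning_id alone" rule is easy to enforce with a small sanity check before sending the request. This is my own sketch, not an official API helper; it only assumes the id prefixes (rs_, msg_) shown in this thread:

```javascript
// Sanity check before sending: every reasoning reference must be immediately
// followed by its paired message reference. A lone rs_* reference (or one
// followed by anything else) is what triggers the 400 error.
function validateReferences(refs) {
  return refs.every((ref, i) => {
    if (ref.type !== 'item_reference' || !ref.id.startsWith('rs_')) return true;
    const next = refs[i + 1];
    return next?.type === 'item_reference' && next.id.startsWith('msg_');
  });
}
```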


💡 Why This Works

By following this pattern, I was able to maintain seamless context in GPT-5 conversations without previous_response_id, avoiding all item-pairing errors — even with tool calls.

Hope this helps others who are running into the same issue!


This is too inefficient. It doesn't differ from manually maintaining the context yourself.

I hope they fix this issue with the usage of previous_response_id.


For what it’s worth: I had this same issue and when I stopped using structured outputs, the issue didn’t occur anymore.

I agree! What is the point of previous_response_id if you have to manually construct history in this tedious way?

Do you think the following strategy would work?

Use previous_response_id until a reasoning/message pair is detected, then switch to item_reference inputs for the next response request.

If the following response doesn't have a reasoning/message pair, switch back to using previous_response_id.
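In case anyone wants to experiment with that idea, here is a rough sketch of the switching logic. The function names, the response shape (id plus an output array of items), and the pair-detection rule are my assumptions; this is untested against the API:

```javascript
// Detect whether a response contains a reasoning item immediately
// followed by a message item (the pair this thread is about).
function hasReasoningMessagePair(outputItems) {
  return outputItems.some(
    (item, i) =>
      item.type === 'reasoning' && outputItems[i + 1]?.type === 'message'
  );
}

// Build the context portion of the next request: explicit item
// references when a pair exists, previous_response_id otherwise.
function buildNextRequestContext(lastResponse) {
  if (hasReasoningMessagePair(lastResponse.output)) {
    const refs = [];
    lastResponse.output.forEach((item, i) => {
      if (
        item.type === 'reasoning' &&
        lastResponse.output[i + 1]?.type === 'message'
      ) {
        // Keep the pair together and in order.
        refs.push({ type: 'item_reference', id: item.id });
        refs.push({ type: 'item_reference', id: lastResponse.output[i + 1].id });
      }
    });
    return { input: refs };
  }
  // No pair detected: fall back to previous_response_id.
  return { previous_response_id: lastResponse.id };
}
```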

I want to implement my chat service so that conversation context is preserved using previous_response_id.
However, as I explained, starting from the GPT-5 model, using previous_response_id causes errors.
Even when I provide no item_reference at all, item-related errors still occur.
That’s why I had to come up with the workaround I described.

I still believe that the best way to maintain conversation context is through previous_response_id, and I sincerely hope this approach can be improved and supported again.

What do you mean by not using structured outputs?
Could you explain in more detail what approach you took instead?

I’m not using the structured output feature.
When I make a Responses API request, I don’t specify anything in the format field or similar parameters.

Were you able to find any solution apart from the reference one you shared?

One question,

I tried this and it works for the second message; I get a response successfully. However, when I add a third message and pass another reference to the second message id, it starts throwing the same error again, even with the reference:

{
    "error": {
        "message": "Item 'msg_68ca****a6c078' of type 'message' was provided without its required 'reasoning' item: 'rs_68ca***da6c078'.",
        "type": "invalid_request_error",
        "param": "input",
        "code": null
    }
}

See the screenshots for reference:

I've been banging my head against this for the past week; any help would be really appreciated.

@SamAltman How do we build value layers with these silly issues?

@nikunj

Thanks in Advance.

I have the same doubt, and it raises a big question about reliability: if we use this reference approach and the user asks something like ‘can you look into that file again’, the model can't actually do that.

@OpenAI_Support Do you consider this an issue? Is there an expected resolution ETA? If not, is the Responses API really worth using, or do you have plans to replace it as well, like the Assistants API?

At our company, we are seeing a lot of errors like this, especially when code_interpreter is called. This situation should be easier to deal with, or at least better documented; at this point the API feels unusable. @OpenAI_Support
