How to solve "BadRequestError: 400 Item 'rs_ ... ' of type 'reasoning' was provided without its required following item" error in Responses API

Hello everyone.

I started experiencing the error mentioned in the title today. The OpenAI Responses API triggers it in the following flow:

  1. After the first prompt, I get an output like this (yes, I’m using the Computer Use feature):
[
    {
        "type": "reasoning",
        "id": "rs_67e298fa71e48191804f5ef7dc01a79f058d08e67dc0a7e4",
        "summary": []
    },
    {
        "type": "computer_call",
        "id": "cu_67e298fd07a88191891aa0cfd233b95f058d08e67dc0a7e4",
        "call_id": "call_vjl9d5iUZJntCiFa5CSz3E05",
        "action": {
            "type": "click",
            "button": "left",
            "x": 481,
            "y": 377
        },
        "pending_safety_checks": [],
        "status": "completed"
    }
]
  2. My application then makes another request to the API with this input:
[
  {
    type: 'reasoning',
    id: 'rs_67e298fa71e48191804f5ef7dc01a79f058d08e67dc0a7e4',
    summary: []
  },
  {
    call_id: 'call_vjl9d5iUZJntCiFa5CSz3E05',
    type: 'computer_call_output',
    output: {
      type: 'input_image',
      image_url: 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAABQAAAALQCAIAAABAH0oBAAAAAXNSR0IArs4c6QAAIABJREFUeJzsnemW3DayrQMpyZZkzbL74e856577fKetWbLUkipxfyQDsWMCwamq3F1cXnCIlUmA' ....
    },
    current_url: 'about:home'
  }
]

And this is the code snippet showing how I call the endpoint (in Node.js):

response = await openai.responses.create({
    model: "computer-use-preview",
    previous_response_id: response?.id,
    tools: [
        {
            type: "computer_use_preview",
            ...dimensions,
            environment: "browser", // other possible values: "mac", "windows", "ubuntu"
        },
        ...functions,
    ],
    input: input,
    truncation: "auto",
});

Currently, it’s unclear to me how to deal with outputs of type “reasoning”, so I just add them back to the input, since the docs don’t say what to do with these objects. Sometimes the API returns a reasoning object alone in “response.output”, and in those cases I used to add the reasoning object back to the input. That was working. But as of today it has stopped working, and even when I always add the “reasoning” objects to the input (which the documentation says isn’t required when “previous_response_id” is used), the error is still thrown.
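
For reference, the loop around that call looks roughly like this (a sketch; takeScreenshot is a stand-in for my own helper that returns a base64 PNG, and dimensions holds the display size from my setup):

// Sketch of my turn loop: echo reasoning items back and answer computer_call
// items with a computer_call_output containing a fresh screenshot.
async function runTurn(openai, response, dimensions) {
    const input = [];

    for (const item of response.output) {
        if (item.type === "reasoning") {
            // I re-send these unchanged, since it's unclear what else to do with them.
            input.push(item);
        } else if (item.type === "computer_call") {
            const screenshot = await takeScreenshot(); // my own helper (base64 PNG)
            input.push({
                type: "computer_call_output",
                call_id: item.call_id,
                output: {
                    type: "input_image",
                    image_url: `data:image/png;base64,${screenshot}`,
                },
            });
        }
    }

    return openai.responses.create({
        model: "computer-use-preview",
        previous_response_id: response.id,
        tools: [{ type: "computer_use_preview", ...dimensions, environment: "browser" }],
        input,
        truncation: "auto",
    });
}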

Looking forward to getting an answer from the OpenAI devs, since they know what’s going on.

Thanks.


The reasoning object is for streamed ‘thinking’ progress. You can think of response.output[n] as a kind of event stream that needs type-based parsing.

The call is handled much like a tool call you repeat back when appending to your chat history: it has to be repeated, paired by id with the function return.

This is documented in the API reference, though it takes many “expand” clicks to find.

The API reference conflates output-only types. There is probably no useful purpose in returning a reasoning summary on your end, but there is still an API validator for tool pairings that acts on “id”.
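
To make the pairing concrete, when you manage the item list yourself the validator seems to expect something like this (a sketch, not official guidance; the ids and image data are placeholders):

// The reasoning item, the computer_call it preceded, and your
// computer_call_output travel together in the next input, each keeping its id.
const nextInput = [
    { type: "reasoning", id: "rs_...", summary: [] },
    {
        type: "computer_call",
        id: "cu_...",
        call_id: "call_...",
        action: { type: "click", button: "left", x: 481, y: 377 },
        status: "completed",
    },
    {
        type: "computer_call_output",
        call_id: "call_...",
        output: { type: "input_image", image_url: "data:image/png;base64,..." },
    },
];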

Strangely, an hour after I posted the question it started working again. No changes were made on my end, except that I made reasoning objects always be added back to the input array, which I did while trying to solve the error. In “working” conditions, I deal with reasoning objects like this:

let input = [];
for (const output of response.output) {
    if (output.type === 'reasoning') {
        // Only re-send the reasoning item when it is the sole item in the output
        if (response.output.length === 1) {
            input.push(output);
        }
    } else {
        // rest of the logic
    }
}

This handles an edge case I faced where the API returns a reasoning object alone. When it’s not alone, the pairing is handled properly by passing the response id as “previous_response_id” on the next API call, so I simply ignore those objects.

Update: the edge-case workaround doesn’t work; the error is still returned. I will try enabling reasoning summaries to see what’s going on, or I’ll stop using the “previous_response_id” approach and keep the full chat history in “input”.

Hi,

I ran into the same problem and tried the exact same workaround with previous_response_id. The cause seems to be a single “reasoning” item, which should not appear without its paired “computer_call” item. I tried dropping the reasoning message from the response chain, but then I ran into “no output found for tool call” (or similar).

I think this is a bug in the response.

In the meantime I’ve worked around it by managing the message (item) list myself instead of chaining responses with previous_response_id. That way I can easily ignore the malformed response that contains only a single reasoning item, and the model is happy again with the provided input.
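
Roughly, the item-list management looks like this (a sketch under my assumptions; the display sizes are example values from my setup):

// Sketch: keep the item list yourself instead of chaining with previous_response_id.
const history = [];

function appendOutputs(response) {
    const isLoneReasoning =
        response.output.length === 1 && response.output[0].type === "reasoning";
    if (isLoneReasoning) {
        return; // ignore the malformed response that contains only a reasoning item
    }
    history.push(...response.output);
}

async function nextTurn(openai, newItems) {
    history.push(...newItems); // e.g. the computer_call_output for the last call
    return openai.responses.create({
        model: "computer-use-preview",
        tools: [
            {
                type: "computer_use_preview",
                display_width: 1024,  // example values
                display_height: 768,
                environment: "browser",
            },
        ],
        input: history,
        truncation: "auto",
    });
}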

@alpengeist I was running into this issue too, and what fixed it for me was only including the reasoning calls if they were also present alongside a computer call.

E.g.:

  - output contains only reasoning & text blocks = do not include reasoning
  - output contains only a reasoning block = do not include reasoning
  - output contains reasoning & computer call blocks = include reasoning in the next request

For additional context, I’m not using previous_response_id, but managing the chat history manually.
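
In code, the filter is essentially this (a sketch of the rule above):

// Keep reasoning items only when the same output also contains a computer_call;
// otherwise drop them before adding the output to the chat history.
function filterOutputForHistory(output) {
    const hasComputerCall = output.some((item) => item.type === "computer_call");
    return output.filter((item) => item.type !== "reasoning" || hasComputerCall);
}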

Yes, that’s exactly the strategy I came up with as well. :+1:

I was encountering the same error with custom tools. I fixed it by managing the message history myself and removing all reasoning and __openai_function_call_ids__ items from the additional_kwargs.

I got rid of the error, but I’m still concerned about how this might potentially affect the model’s performance.

Any guidance would be greatly appreciated!

I did some more investigation on this, and I think I figured it out.

The Responses API abstracts inputs and outputs into what they’re calling “items.” Items have different validation rules depending on their type. The reasoning item type appears to have a validation rule that requires it to be paired up with some other item in the output. They seem to require an almost exact match for this, so you must populate the complete item, including its ID, or provide an item reference that points to the item. Item references take the following form:

{
  type: "item_reference",
  id: "id_of_the_item"
}

There does seem to be an allowance made for the status field, however. It appears to be ignored when using previous outputs as inputs.
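
So, based on my testing, an input like this also seems to satisfy the validator, referencing the reasoning item instead of echoing it back in full (a sketch; the image data is truncated):

// Sketch: pass an item_reference pointing at the reasoning item from the
// previous output, alongside the matching computer_call_output.
const input = [
    { type: "item_reference", id: "rs_67e298fa71e48191804f5ef7dc01a79f058d08e67dc0a7e4" },
    {
        type: "computer_call_output",
        call_id: "call_vjl9d5iUZJntCiFa5CSz3E05",
        output: { type: "input_image", image_url: "data:image/png;base64,..." },
    },
];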

The workaround of only including reasoning items in the input when we also had a computer use item works for us in LangChain only because, in that case, the reasoning item is always paired with the first computer use item. The real issue was that we weren’t populating the ID of the message item when the reasoning item was produced as part of normal text generation.

This will be fixed in the next release of the LangChain OpenAI provider.

Incidentally, this is what _j was trying to tell us in the very first reply to this thread.
