Responses API: invalid_request_error - 'function_call' was provided without its required 'reasoning' item

I'm using the Responses API, and switching from o3-mini to o4-mini has introduced a consistent error when using custom tools.

Every time I include an input array with a function_call and function_call_output, I get an invalid_request_error saying that an item of type 'function_call' was provided without its required 'reasoning' item.

This seems similar to the error reported here: How to solve "BadRequestError: 400 Item 'rs_ ... ' of type 'reasoning' was provided without its required following item" error in Responses API - #7 by alpengeist

The only way I’ve found to fix it is to include an extra reasoning input before the function_call and function_call_output, with its ID and an (empty) summary set.
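The workaround input ends up looking roughly like this (a minimal sketch; the rs_/call_ IDs and the get_weather tool are placeholders, not real values):

input_messages = [
    {"role": "user", "content": "What's the weather in Paris?"},
    # extra reasoning item, reusing the id from the previous response with an empty summary
    {"type": "reasoning", "id": "rs_abc123", "summary": []},
    {"type": "function_call", "call_id": "call_abc123",
     "name": "get_weather", "arguments": "{\"city\": \"Paris\"}"},
    {"type": "function_call_output", "call_id": "call_abc123", "output": "18°C and sunny"},
]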

I can’t see this documented anywhere - is this an intentional requirement, or an API bug?


Same here. I also notice that if you end up with a reasoning item as the last input item (not followed by a function call), you will get this error:

{
    "message": "Item 'rs_123' of type 'reasoning' was provided without its required following item.",
    "type": "invalid_request_error",
    "param": "input"
}
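One way to avoid that one is to drop a trailing reasoning item before sending. A minimal sketch, assuming items is the plain list of dicts about to be passed as input:

# if the last item is a bare reasoning item with nothing following it,
# remove it so the request validates
if items and items[-1].get("type") == "reasoning":
    items.pop()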

I just ran into this as well (with o3). What is also confusing to me is that response.output[0] is 'reasoning' and response.output[1] is a tool_call, and yet the status is 'completed', which seems wrong to me as well.

I am glad I pushed through with updating my code to work, because I had some prompting issues that, no matter what, I could not get 4.1 or 4o to do right. And o3 did it right away!

Then there is a barely-described API reference entry, about a dozen layers deep if you expand the Responses input message types: encrypted reasoning. This reasoning item that you cannot read yourself seems to be something you can pass back, but there is no option to enable or disable receiving it in events. Yet the API reference, in parallel, still refers to passing a reasoning ID as required, which will fail if store is false. And if it is necessary for context quality, and not just massive token consumption, it is not present on Chat Completions at all.

A real cluster** of differences between specification, documentation, and API - and services being broken by the ID validation.
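For anyone who needs the stateless route, the pattern I pieced together looks roughly like this (a sketch only, assuming client is an OpenAI client as in the other snippets in this thread, and that the include/store parameters behave as the reference describes):

response = client.responses.create(
    model="o4-mini",
    input=input_messages,
    tools=tools,
    store=False,  # nothing is persisted server-side, so reasoning IDs can't be looked up later
    include=["reasoning.encrypted_content"],  # return reasoning in an encrypted form you can pass back
)
# the reasoning items in response.output now carry encrypted_content,
# and the whole output can be fed back with input_messages.extend(response.output)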

I believe you. I’m glad I don’t have to deal with those encrypted reasonings for now :slight_smile:

I also had this issue and didn’t see documentation anywhere.

Anyway, like @jlvanhulst said, the responses you get from reasoning models when they use function calls come in pairs: a 'reasoning' item and a 'function_call' item. Both of these need to be present in the input argument, along with the 'function_call_output', when you send the results of your function call back to the model.

At least, I think so. Something like:


from openai import OpenAI

client = OpenAI()

# input_messages is the running conversation; tools is your function tool list
response = client.responses.create(
    model="o3",
    input=input_messages,
    tools=tools,
)

# find the function call (if any) in the model output
tool_call = next(
    (r for r in response.output if r.type == "function_call"), None
)

if tool_call:
    # append the whole output (reasoning item + function_call item)
    # back onto the conversation -- this is what I mean
    input_messages.extend(response.output)

    # run the tool and append its result
    result = use_tool(tool_call)
    input_messages.append({
        "type": "function_call_output",
        "call_id": tool_call.call_id,
        "output": str(result)
    })

    # second call now has reasoning, function_call and
    # function_call_output all present in order
    response_2 = client.responses.create(
        model="o3",
        input=input_messages,
        tools=tools
    )
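Passing response.output back verbatim (rather than rebuilding the items by hand) keeps the reasoning item, with its original rs_ id, right next to its function_call, which appears to be what the validation checks for.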

I tried to use this method, following the documentation:

        # rebuild the reasoning and function_call items by hand
        for item in response.output:
            if item.type == 'reasoning':
                history_messages.append({
                    'id': item.id,
                    'type': 'reasoning',
                    'summary': item.summary
                })
            elif item.type == 'function_call':
                history_messages.append({
                    'type': 'function_call',
                    'name': item.name,
                    'call_id': item.call_id,
                    'arguments': item.arguments,
                })

But it didn’t work until I did it like this: input_messages.extend(response.output).

Why?

Hi all, I’ve been trying this with gpt-5, and the reasoning output throws the error described above when using parallel function call tools while streaming. I am trying to build the new input array, but to no avail. Speak to me like I’m 5… what is the final structure of the input for the second call that includes the function_call_output results?

I basically have this:

[reasoning_item,
 function_call_item1,
 function_call_output1,
 function_call_item2,
 function_call_output2]

I receive the error that function_call2 does not have the related reasoning input. Is this a bug? I’ve tried many configurations, including the item_reference input. Very frustrating. How can I specify the reasoning item for all the function calls? I cannot add the reasoning item twice, as it complains.
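(For context, an item_reference input is just a pointer back to an existing item by ID; roughly, with a placeholder ID:

{"type": "item_reference", "id": "rs_abc123"}

That was one of the shapes I tried, without success.)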

Same here. I found the solutions here helpful!

I found a solution!!

When I used GPT-5, I experienced a similar issue.
So I ran many test cases and eventually discovered a pattern in how the Responses API behaves.

I posted it.

Try making good use of the item input.
I hope this information was helpful to you.

I’m unsure whether they were helpful, because it wasn’t outlined how to add item_reference to the payload, or where to put reasoning items that don’t exist when the error complains about a function call missing a reasoning item that was never provided. You cannot add the reasoning item twice. There is only one empty reasoning item, which pairs with the first function call, but then the subsequent function calls in the same loop (i.e. parallel calls) throw errors.

I gave up and used the non-parallel method, which is working very well and has many examples. I just wish I hadn’t wasted a few days trying to figure it out. I believe this is a bug that hasn’t been fixed for reasoning models using parallel tool calls.