Code_interpreter, reasoning and previous_response_id bug? (Item ‘...’ of type ‘reasoning’ was provided without its required following item.)

This happens with both o3 and o4-mini.

  • The first call is OK: it returns a list of reasoning items, all with the same id, say 'rs_ABCD'.
  • The second call is identical to the first, BUT I provide previous_response_id (and a different question, of course),
    and I get:
    {
      "error": {
        "message": "Item 'rs_ABCD' of type 'reasoning' was provided without its required following item.",
        "type": "invalid_request_error",
        "param": "input",
        "code": null
      }
    }

I did not find any help by googling. Can someone suggest anything to solve this?
Thanks,
enrico

P.S.: The same flow works with gpt-4.1 (which has no reasoning items).

As a workaround, it looks like it works if one omits previous_response_id and explicitly passes all the previous conversation items as item references:
"input": [
  ..
  {
    "id": "x1",
    "type": "item_reference"
  },
  {
    "id": "x2",
    "type": "item_reference"
  },

  – new question
]

Still, I would prefer to store a single id (previous_response_id) on my side rather than the full list.
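
For reference, this is roughly what that workaround looks like with the Python SDK. A minimal sketch, assuming the ids "x1" and "x2" come from the previous response's output and the question text is just a placeholder:

from openai import OpenAI

client = OpenAI()

# Ids of the items returned by the earlier response (placeholders here)
previous_item_ids = ["x1", "x2"]

response = client.responses.create(
    model="o4-mini",
    input=[
        # Reference the earlier items explicitly instead of previous_response_id
        *[{"id": item_id, "type": "item_reference"} for item_id in previous_item_ids],
        # New question
        {"role": "user", "content": "A different question."},
    ],
)
print(response.output_text)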

Well, nope, it does not always work :frowning: ... sometimes it does, sometimes I get the same error.


+1. This seems to happen consistently when files are attached to the container. Here is a minimal example. Let me know if I’m doing something wrong!

import io

from openai import OpenAI


client = OpenAI()

# Create sample file
csv_content = """X,Y
-1,-0.84
-0.8,-0.72
-0.6,-0.56
-0.4,-0.39
-0.2,-0.2
0,0
0.2,0.2
0.4,0.39
0.6,0.56
0.8,0.72
"""

csv_file = io.BytesIO(csv_content.encode("utf-8"))


# Upload file
# Give the in-memory file an explicit name so it lands in the container as a .csv
file = client.files.create(file=("sample.csv", csv_file), purpose="user_data")
file_id = file.id
print(f"File ID: {file_id}")


# Generate first response
model = "o4-mini"
reasoning = {"effort": "medium", "summary": "auto"}

response_1 = client.responses.create(
    model=model,
    reasoning=reasoning,
    tools=[
        {
            "type": "code_interpreter",
            # Create a new container
            "container": {"type": "auto", "file_ids": [file_id]},
        }
    ],
    input="Please plot the sample data and tell me what you see.",
)
print("\n------\n")
print(response_1)


# Follow up raises BadRequestError
response_2 = client.responses.create(
    model=model,
    reasoning=reasoning,
    input="Thanks!",
    previous_response_id=response_1.id,
)
print("\n------\n")
print(response_2)

We are experiencing the same issue. In our case, it can be reproduced when there is a reasoning step together with the CodeInterpreter tool. We receive the error when the agent does a hand-off to another agent with a different effort setting.

I found a solution!

Do not include the reasoning item ids in the input when you use the "code_interpreter" tool.

When I used GPT-5, I experienced a similar issue, so I ran many test cases and eventually discovered a pattern in how the Responses API behaves.

I posted this solution.

Try making good use of the item input.
I hope this information was helpful to you.
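
For what it's worth, here is a minimal sketch of that idea applied to the reproduction script above, assuming you rebuild the follow-up input yourself (instead of using previous_response_id) and simply drop the reasoning items. The names response_1, model, reasoning and file_id are reused from that script, and whether the other items round-trip cleanly is not guaranteed:

# Sketch only: pass the prior output back as input, minus the reasoning items
follow_up_input = [
    item.model_dump(exclude_none=True)
    for item in response_1.output
    if item.type != "reasoning"
]
follow_up_input.append({"role": "user", "content": "Thanks!"})

response_2 = client.responses.create(
    model=model,
    reasoning=reasoning,
    tools=[
        {
            "type": "code_interpreter",
            # Reuse the container that already holds the uploaded file
            "container": {"type": "auto", "file_ids": [file_id]},
        }
    ],
    input=follow_up_input,
)
print(response_2)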