Using previous_response_id fails when swapping from reasoning -> non-reasoning models

previous_response_id works great; however, I encounter this error when swapping from a reasoning model to a non-reasoning model, presumably a common use case.

I can’t seem to find a way to list and clean out the reasoning steps in this scenario, because reasoning items don’t appear in client.responses.input_items.list(). The only workaround I can think of is to manage the conversation history manually, as in Chat Completions, but that removes all of the benefits of previous_response_id.

Ideally the backend would be smart enough to handle this and strip reasoning inputs when a non-reasoning model is selected. Alternatively, a function to clear these out on our end would help in the meantime.
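In the meantime, the best I can sketch is a small helper that rebuilds the input manually, dropping reasoning items before calling the non-reasoning model. This is only a sketch, assuming items serialize to dicts with a `type` field (as `model_dump()` produces); the `llm` and `response1` names follow the repro below.

```python
def strip_reasoning_items(items):
    """Drop any item whose type is 'reasoning' so the remaining
    history can be fed to a non-reasoning model."""
    return [item for item in items if item.get("type") != "reasoning"]

# Hypothetical usage alongside the repro below (needs a live client):
# history = [{"role": "user", "content": "what is a recursive python function?"}]
# history += strip_reasoning_items(
#     [item.model_dump() for item in response1.output]
# )
# history.append({"role": "user", "content": "hi"})
# response2 = await llm.responses.create(input=history, model="gpt-4.1")
```

The cleaned history is then passed as `input=` without `previous_response_id`, which sidesteps the 400 but loses the server-side state.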

I’m using Azure OpenAI.

OpenAI Version: 1.79.0
Azure API Version: 2025-04-01-preview

To Reproduce

response1 = await llm.responses.create(
    input="what is a recursive python function?",
    instructions="formatting re-enabled",
    model="o4-mini",
    reasoning={"effort": "medium", "summary": "detailed"},
)
print(response1)
response2 = await llm.responses.create(
    input="hi",
    previous_response_id=response1.id,
    model="gpt-4.1",
)
print(response2)
---------------------------------------------------------------------------
BadRequestError                           Traceback (most recent call last)
Cell In[44], line 8
      1 response1 = await llm.responses.create(
      2     input="what is a recursive python function?",
      3     instructions="formatting re-enabled",
      4     model="o4-mini",
      5     reasoning={"effort": "medium", "summary": "detailed"},
      6 )
      7 print(response1)
----> 8 response2 = await llm.responses.create(
      9     input="hi",
     10     previous_response_id=response1.id,
     11     model="gpt-4.1",
     12 )
     13 print(response2)

File c:\Users\user\repo\.venv\Lib\site-packages\openai\resources\responses\responses.py:1559, in AsyncResponses.create(self, input, model, include, instructions, max_output_tokens, metadata, parallel_tool_calls, previous_response_id, reasoning, service_tier, store, stream, temperature, text, tool_choice, tools, top_p, truncation, user, extra_headers, extra_query, extra_body, timeout)
   1529 @required_args(["input", "model"], ["input", "model", "stream"])
   1530 async def create(
   1531     self,
   (...)   1557     timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
   1558 ) -> Response | AsyncStream[ResponseStreamEvent]:
-> 1559     return await self._post(
   1560         "/responses",
...
-> 1549         raise self._make_status_error_from_response(err.response) from None
   1551     break
   1553 assert response is not None, "could not resolve response (should never happen)"

BadRequestError: Error code: 400 - {'error': {'message': 'Reasoning input items can only be provided to a reasoning or computer use model. Remove reasoning items from your input and try again.', 'type': 'invalid_request_error', 'param': 'input', 'code': None}}

Windows 11
Python v3.13.2
openai v1.79.0

I was told to post this here.


As the API informed you, reasoning input items are not compatible with non-reasoning models.

They are most likely encrypted, as you can see in the docs about using reasoning in stateless mode.

There isn’t much to be done about a model limitation. The Playground also confirms that it is not possible.

I would say that the best you can do is start a “new” conversation with the non-reasoning model, removing the incompatible inputs.

It would be an interesting feature to support this natively, but at the moment it is not supported.

Another alternative is to disable the summary; if you do so, the conversation can continue normally with another model.
Edit: still, it won’t allow previous_response_id even without a summary. The conversation input needs to be recreated.
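A minimal sketch of that recreation, assuming the prior items are available as dicts with a `type` field (the surrounding client calls are left out):

```python
def rebuild_input(prior_items, new_user_text):
    """Recreate the conversation input for a non-reasoning model:
    keep everything except reasoning items, then append the new turn."""
    cleaned = [item for item in prior_items if item.get("type") != "reasoning"]
    cleaned.append({"role": "user", "content": new_user_text})
    return cleaned
```

The result is then passed as `input=` on the next `responses.create` call, without `previous_response_id`.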


I’m hoping @OpenAI_Support can handle this server side like they do in ChatGPT, or like how both “system” and “developer” messages just work regardless of whether the model is a reasoning model or not.


Can I confirm whether swapping between reasoning models is possible on the API? From my initial tests, when I swap to a different reasoning model I get: “Reasoning items can only be provided as input to the same model that generated them.” Does this mean I can’t continue the thread between reasoning models, or have I missed something?