400 error when chaining responses between gpt-4.1 and o4-mini

Description:

When making successive calls to the OpenAI Responses API (POST /v1/responses), the initial call to gpt-4.1 and the intermediate call to the reasoning model o4-mini both succeed. However, the third call, which returns to gpt-4.1 using the previous_response_id from the o4-mini response, always fails with this error:

400 Bad Request
Reasoning input items can only be provided to a reasoning or computer use model. Remove reasoning items from your input and try again.

Environment:

  • API endpoint: POST https://api.openai.com/v1/responses
  • Models tested: gpt-4.1, o4-mini
  • Response storage: enabled (store=true)
  • o4-mini reasoning parameters:
    • effort: high
    • summary: detailed

Reproduction steps:

  1. Request 1: Call gpt-4.1 → receive a response, record its ID (response_id_1).
  2. Request 2: Call o4-mini with the same messages, adding previous_response_id=response_id_1 → receive a response, record its ID (response_id_2).
  3. Request 3: Call gpt-4.1 again, adding previous_response_id=response_id_2 → receive the 400 error above.
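The three requests can be sketched as plain request bodies (no network calls; the response IDs are placeholders standing in for the "id" field returned by each real API response):

```python
# Sketch of the three request bodies sent to POST /v1/responses.
# "resp_id_1" / "resp_id_2" are placeholders for the IDs returned
# by requests 1 and 2.

def make_payload(model, user_input, previous_response_id=None):
    """Build a /v1/responses request body, chaining via previous_response_id."""
    payload = {"model": model, "input": user_input, "store": True}
    if previous_response_id is not None:
        payload["previous_response_id"] = previous_response_id
    return payload

# Request 1: gpt-4.1, no chaining -- succeeds
req1 = make_payload("gpt-4.1", "First question")

# Request 2: o4-mini chained to response_id_1 -- succeeds
req2 = make_payload("o4-mini", "Follow-up", previous_response_id="resp_id_1")
req2["reasoning"] = {"effort": "high", "summary": "detailed"}

# Request 3: gpt-4.1 chained to response_id_2 -- fails with the 400 above
req3 = make_payload("gpt-4.1", "Another follow-up", previous_response_id="resp_id_2")
```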

Expected result:
The third request should succeed and continue the conversation thread on gpt-4.1.

Observed result:
The call fails with the error indicating that “reasoning input items” aren’t allowed on a non-reasoning model.

Explanation:
When you chain a reasoning-model call to a non-reasoning-model call via previous_response_id, the stored conversation items, including those with type: "reasoning", are all passed back into gpt-4.1, triggering:
Reasoning input items can only be provided to a reasoning or computer use model
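One possible workaround, sketched below and not confirmed as the intended fix: instead of chaining across the model switch with previous_response_id, pass the prior turn's output items back explicitly as input, dropping any item whose type is "reasoning" first. The sample items here only mirror the general Responses API output shape; they are illustrative, not real API output.

```python
# Sketch: strip reasoning items from a stored response's output before
# handing the history back to a non-reasoning model such as gpt-4.1.

def strip_reasoning_items(items):
    """Return only the items a non-reasoning model will accept."""
    return [item for item in items if item.get("type") != "reasoning"]

# Illustrative stand-in for the o4-mini response's output items.
o4_mini_output = [
    {"type": "reasoning", "summary": [{"type": "summary_text", "text": "..."}]},
    {"type": "message", "role": "assistant",
     "content": [{"type": "output_text", "text": "Answer from o4-mini"}]},
]

# Pass this as `input` to the gpt-4.1 request instead of setting
# previous_response_id=response_id_2.
safe_input = strip_reasoning_items(o4_mini_output)
```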

Desired solution:
Please don't suggest simply turning reasoning off; what is the proper way to chain a conversation from o4-mini back to gpt-4.1?
