I’ve encountered an issue with the Responses API when using the previous_response_id parameter to maintain conversation context. Specifically, after including previous_response_id in my requests, the model occasionally responds to earlier parts of the conversation rather than addressing the most recent input. This leads to responses that seem out of context or unrelated to the latest user query.
Steps to Reproduce:
- Initiate a conversation using the Responses API.
- For each subsequent request, include the previous_response_id to maintain context (see the minimal sketch after this list).
- Observe that, in some cases, the model’s response pertains to an earlier part of the conversation instead of the latest input.
Observed Behavior:
The model generates responses that are relevant to previous messages in the conversation history but not to the most recent user input. This behavior disrupts the flow of the conversation and affects the user experience.
Expected Behavior:
When using previous_response_id to maintain context, the model should generate responses that are directly relevant to the latest user input, drawing on the conversation history to inform its replies.
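As a rough way to spot the mismatch, I log each turn’s input next to the reply it produces. This is just a diagnostic sketch using the same client and chaining pattern as above; the helper name is mine, not part of the API:

```python
def chained_turn(client, model, previous_id, user_input):
    """Send one chained turn and print the input/output pair for inspection."""
    resp = client.responses.create(
        model=model,
        previous_response_id=previous_id,
        input=user_input,
    )
    print(f"user  > {user_input}")
    print(f"model > {resp.output_text}")
    return resp
```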
Additional Information:
I have reviewed the OpenAI API Reference for similar issues but have not found a resolution. This behavior appears to be a bug in how the Responses API handles the previous_response_id parameter.
I would appreciate any insights or guidance on resolving this issue. Has anyone else experienced similar behavior when using previous_response_id with the Responses API?