I’m currently integrating with the OpenAI Responses API to retrieve the full context of a conversation for analysis and storage. I’m using the following endpoint:
/v1/responses/{response_id}/input_items
When calling this endpoint with the latest response_id from a conversation, the returned object does not include the most recent text response generated by the model. Instead, it contains only the historical context up to the last input, excluding the final output.
How can I programmatically retrieve the full historic conversation, with all input-output pairs (not just inputs), including the latest model response?
If I'm not wrong, you can take the latest response object, read its "previous_response_id" property, and loop to retrieve the whole conversation until you reach the top (where previous_response_id is null).
I suppose this is the same thing the API does internally when you pass previous_response_id to continue an existing conversation. There is no conversation object, just a link to the previous element.
To complement that, it seems you can also call the list input items API to retrieve the remaining inputs belonging to that particular conversation.
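A minimal sketch of that walk-back loop. The retriever is passed in as a plain callable so you can hand it `client.responses.retrieve` from the official openai Python SDK (assumed here, not shown in the thread); each retrieved response is expected to expose a `previous_response_id` attribute that is null/None at the first turn:

```python
def collect_conversation(retrieve, latest_response_id):
    """Walk the previous_response_id chain from newest to oldest.

    `retrieve` is any callable that takes a response id and returns an
    object with a `previous_response_id` attribute, e.g.
    `client.responses.retrieve` from the openai SDK (assumption).
    Returns the response objects in chronological (oldest-first) order.
    """
    responses = []
    current_id = latest_response_id
    while current_id is not None:
        response = retrieve(current_id)
        responses.append(response)
        current_id = response.previous_response_id  # None at the top
    responses.reverse()  # oldest first
    return responses
```

With the real SDK this would be something like `collect_conversation(client.responses.retrieve, "resp_...")`; you can then pair each response with the list input items call to reconstruct the inputs too.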
That's the old API; the Responses API doesn't include previous_response_id. I've wasted so much time on this. I get that the OAI dev team is moving fast, but the quality of the API is just not there.
Yeah, but if you use that response_id to get the transcript history, it doesn't return everything - the call to responses/{response_id}/input_items doesn't return the whole history. Funny enough, it works in the Platform Dashboard, but not outside of it. Not sure what the secret sauce is.
The secret sauce is to chain another retrieval for each previous_response_id until there is no previous one left, which indicates you have reached the first response.
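Concretely, the chaining can be combined with the list input items call to rebuild the full input-output pairs. This is a sketch under the assumption that `retrieve` and `list_input_items` stand in for `client.responses.retrieve` and `client.responses.input_items.list` from the openai Python SDK; they are injected as callables here so the traversal logic is self-contained:

```python
def build_transcript(retrieve, list_input_items, latest_response_id):
    """Chain retrievals through previous_response_id until it is None,
    then return (input_items, output_text) pairs, oldest turn first.

    `retrieve` and `list_input_items` stand in for the SDK calls
    `client.responses.retrieve` and `client.responses.input_items.list`
    (assumption based on this thread, not verified here).
    """
    # Walk back from the newest response to the first one.
    chain = []
    current_id = latest_response_id
    while current_id is not None:
        response = retrieve(current_id)
        chain.append(response)
        current_id = response.previous_response_id
    # Replay oldest-first, pairing each turn's inputs with its output.
    return [
        (list_input_items(response.id), response.output_text)
        for response in reversed(chain)
    ]
```

One retrieval per turn means the cost grows linearly with conversation length, so for long conversations it is worth caching responses you have already fetched.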