How to properly retrieve context in an OpenAI Realtime conversation using response.create

I’m trying to generate an out-of-band response with the OpenAI Realtime API using `response.create`, without appending to the default conversation. I want to summarize the conversation and output the client’s sentiment, but every time I send:

```python
import json

prompt = """
Analyze the conversation so far. Provide a 20-word summary and classify the client's sentiment into: neutral, negative, or positive.
Format:
summary: {summary}
sentiment: {sentiment}
"""

event = {
    "type": "response.create",
    "response": {
        "conversation": "none",
        "metadata": {"topic": "sentiment_update"},
        "output_modalities": ["text"],
        "instructions": prompt,
    },
}

ws.send(json.dumps(event))
```

I always get the next assistant message instead of the expected summary/sentiment. I have tried more than 10 times.

This is an example of my response:



```json
{
  "type": "response.done",
  "event_id": "event_XXXXX",
  "response": {
    "object": "realtime.response",
    "id": "resp_YYYYY",
    "status": "completed",
    "status_details": null,
    "output": [
      {
        "id": "item_ZZZZZ",
        "type": "message",
        "status": "completed",
        "role": "assistant",
        "content": [
          {
            "type": "output_text",
            "text": "I'd like to know if you placed an order within the last 3 months."
          ....
    "metadata": {
      "topic": "sentiment_update"
    }
}
```

Which was exactly what the assistant turn was saying. How can I retrieve information like this without having to store the conversation history on disk in our own code?

I suspect the model isn’t accessing the conversation history properly. How can I correctly reference prior messages to produce a proper out-of-band summary using response.create?

Documentation reference: https://platform.openai.com/docs/guides/realtime-conversations#create-responses-outside-the-default-conversation

If you pass a conversation ID, that implies that you are in a chatbot where the user is conducting another conversation turn. The input and output will be appended.

I’ve noted a silent failure mode: if you set `"store": false` (simply because you do not want response IDs persisted), then the conversation also will not be updated with the new turn. As long as that undocumented behavior continues, you could use that technique to prompt a non-consequential turn.
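As a sketch only: if that undocumented `"store": false` behavior holds, the client event might look like the following. The `store` field and its effect here are an observation, not documented API behavior, so treat this as fragile.

```python
import json

# Sketch of the technique described above: a prompt that still sees the
# default conversation context, but whose turn should not be persisted.
# ASSUMPTION: relies on the undocumented "store": false behavior noted
# above; it may change or break without notice.
event = {
    "type": "response.create",
    "response": {
        "store": False,  # assumed to suppress appending this turn
        "output_modalities": ["text"],
        "instructions": "Summarize the conversation so far in 20 words.",
    },
}
payload = json.dumps(event)
# ws.send(payload)  # send over your existing Realtime WebSocket
```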

Instead, you’ll likely want to retrieve all the messages from a conversation through the API method for doing so, and NOT replay them as individual role messages that look like a conversation with the same user. Rewrite the messages as a single block of plain text, stripped down to just the user’s string input and the assistant’s content output, enclosed in a strong container, and then design a developer message and prompt around obtaining the information and analysis you want.
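A minimal sketch of that flattening step, assuming your own code has already accumulated the turns (here as a hypothetical `history` list of `(role, text)` pairs built from transcription and response events):

```python
import json

def build_transcript_block(history):
    """Flatten (role, text) turns into one plain-text block inside a
    strong container, so the model treats it as data, not as dialogue."""
    lines = [f"{role}: {text}" for role, text in history]
    return "=== TRANSCRIPT START ===\n" + "\n".join(lines) + "\n=== TRANSCRIPT END ==="

# Hypothetical accumulated history for illustration.
history = [
    ("user", "Hi, I never received my package."),
    ("assistant", "I'm sorry to hear that. Can you share your order number?"),
]

prompt = (
    "Below is a transcript of a support call. Provide a 20-word summary and "
    "classify the client's sentiment as neutral, negative, or positive.\n\n"
    + build_transcript_block(history)
)

event = {
    "type": "response.create",
    "response": {
        "conversation": "none",  # out-of-band: don't touch the default conversation
        "metadata": {"topic": "sentiment_update"},
        "output_modalities": ["text"],
        "instructions": prompt,
    },
}
# ws.send(json.dumps(event))  # send over your existing Realtime WebSocket
```

Because the full context travels inside `instructions`, the model no longer needs access to the session's prior items at all.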

Thank you very much for your detailed response. If I understand correctly, there is currently no built-in way to generate a summary of a conversation using response.create without providing the conversation context, because the model does not automatically have access to prior messages.

In practice, I am collecting both the customer transcript and the AI agent transcript. While the AI agent transcript is accurate and faithfully represents what the model said, the customer transcript—despite using GPT for transcription—is often unreliable.

I had hoped that leveraging response.create as described in the documentation would allow me to summarize the session directly, but it seems that without explicitly providing the context, this is not possible.

Is there a way to reference a full conversation ID—or the items of a conversation—so that the model could generate a summary or extract insights from the entire session? Or is the recommended approach to always provide the conversation content manually when creating out-of-band responses?