Hi there!
I’ve been experimenting with the openai/openai-realtime-agents demo on GitHub (a simple demonstration of more advanced, agentic patterns built on top of the Realtime API),
and I was wondering whether creating responses outside the default conversation (e.g. for analysis of the user input, as in the example below) has the same effect as extending the system prompt of GPT-4o Realtime: giving it too many responsibilities and thus degrading its quality.
For example, let’s say I want to analyse n different aspects of the user audio input.
prompt = """
Analyze the conversation so far. If it is related to support, output
"support". If it is related to sales, output "sales".
"""
event = {
"type": "response.create",
"response": {
# Setting to "none" indicates the response is out of band,
# and will not be added to the default conversation
"conversation": "none",
# Set metadata to help identify responses sent back from the model
"metadata": { "topic": "classification" },
# Set any other available response fields
"modalities": [ "text" ],
"instructions": prompt,
},
}
ws.send(json.dumps(event))
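For the n-aspects case, one approach (a sketch, not from the demo) is to send one out-of-band `response.create` event per analysis task and use the `metadata` field to match the replies when they come back on the socket, since server events for concurrent responses are not guaranteed to arrive in a fixed order. The helper names below (`make_analysis_event`, `route`) are hypothetical, not part of the API:

```python
import json

# Hypothetical helper: builds one out-of-band response.create event
# per analysis task, tagged via metadata so replies can be matched.
def make_analysis_event(topic: str, instructions: str) -> dict:
    return {
        "type": "response.create",
        "response": {
            "conversation": "none",        # out of band: not added to the default conversation
            "metadata": {"topic": topic},  # echoed back, used to route the reply
            "modalities": ["text"],
            "instructions": instructions,
        },
    }

# n independent analyses of the same conversation, each with its own prompt
tasks = {
    "classification": "Classify the conversation as 'support' or 'sales'.",
    "sentiment": "Rate the user's sentiment as 'positive', 'neutral' or 'negative'.",
}

events = [make_analysis_event(topic, prompt) for topic, prompt in tasks.items()]
# for e in events:
#     ws.send(json.dumps(e))  # ws: an open websocket to the Realtime API

# Hypothetical dispatcher: match incoming response.done events by metadata,
# not by arrival order.
def route(server_event: dict, handlers: dict) -> None:
    if server_event.get("type") == "response.done":
        topic = server_event["response"].get("metadata", {}).get("topic")
        if topic in handlers:
            handlers[topic](server_event["response"])
```

Because each out-of-band response carries its own `instructions`, each analysis prompt stays small and focused instead of all of them being folded into one ever-growing session prompt.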