[Realtime API] Server response error message: "Conversation already has an active response"

When testing function calls with the Realtime API, I sent a "conversation.item.create" message containing the "function_call_output" item after each function call, and then sent a "response.create" message to trigger the server's response to the function output.
After a couple of function calls, the server responded with an error message: {"type":"error","event_id":"event_AI7XljfmFSCOULeaTDPAF","error":{"type":"invalid_request_error","code":null,"message":"Conversation already has an active response","param":null,"event_id":null}}
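
For reference, the two events I send after each function call look roughly like this (openAiWs is my WebSocket connection to the Realtime API; callId and result come from my own function-call handling):

// send the tool result back as a conversation item
openAiWs.send(JSON.stringify({
  type: 'conversation.item.create',
  item: {
    type: 'function_call_output',
    call_id: callId,                 // call_id of the function_call item being answered
    output: JSON.stringify(result)   // function result serialized as a string
  }
}));
// then ask the model to respond to the tool output
openAiWs.send(JSON.stringify({ type: 'response.create' }));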

But from the log, every previous response had ended with a "response.done" message from the server, which means there were no incomplete responses.

Anyone experiencing the same problem?

2 Likes

Yes I am experiencing the same problem!

I give the voice agent the option to call an arbitrary function. Inside this function, at the end of some logic, there is a session.response.create() invocation.
This works perfectly, especially at the beginning of the conversation. However, as the conversation grows longer, the "Conversation already has an active response" error keeps reappearing.
For now I just catch the error, which results in radio silence until the user says something else (which triggers another response from the agent). The further into the conversation, the more often this happens.
Any suggestions to solve this are welcome. Maybe someone has tried timeouts?

2 Likes

Also seeing the same issue occasionally after a few function calls.

It took blood, sweat and tears for me to figure this out, because the process flows aren't perfectly documented. :worried:

Just try to imagine for yourself how you would design a realtime interface.

The API can't handle multiple requests at a time, and "response.create" just means "do something for me". So you need to wait in your event loop until you receive a "response.done", and only then can you create a new response. One at a time.

Be careful: when you are in VAD mode, the model automatically creates responses on its own.

Unfortunately, the API currently does not offer any way of retrieving the internal status of responses, conversations, items, and so on.
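
A minimal sketch of that gating, assuming a raw WebSocket named openAiWs and flag names of my own choosing (response.created also fires for responses the server starts itself in VAD mode, so tracking it covers both cases):

let responseActive = false; // true between response.created and response.done
let pendingCreate = false;  // a response.create we still owe the server

function requestResponse() {
  if (responseActive) {
    pendingCreate = true;   // defer until the current response finishes
  } else {
    responseActive = true;
    openAiWs.send(JSON.stringify({ type: 'response.create' }));
  }
}

openAiWs.on('message', (data) => {
  const event = JSON.parse(data);
  if (event.type === 'response.created') {
    responseActive = true;  // covers responses started by server VAD too
  }
  if (event.type === 'response.done') {
    responseActive = false;
    if (pendingCreate) {    // flush the deferred request
      pendingCreate = false;
      requestResponse();
    }
  }
});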

Hey Cas! Glad to hear you are using the same logic as I am: sending a session.response.create event after receiving an optional tool call. However, this has also been unreliable for me. I've been trying timeouts, but to no avail; they're unreliable as well :thinking:

1 Like

Hit the same issue, especially during long-lasting sessions. I added some logic to keep track of the active response and clear it after receiving the response.done event. Now if I receive this error, I check whether there is an outstanding response; if not, I just send a response.create to have the server side regenerate the failed response. It seems to be working so far.

@dkupeli Glad to hear your method is working. Could you share more details about it? Thank you.
For example, regarding "keep track of the active response and clear it after receiving the response.done event", does that mean you send "response.cancel" to the server after you receive the response.done event?
And how do you check whether there is an outstanding response?
I also don't quite understand "just send a response.create to have the server side regenerate the failed response". The server error "Conversation already has an active response" happens precisely because we already sent a response.create, while the server internally holds the incorrect state that a response is still active. But now you send another response.create to the server?

I made the small change mentioned in PR #53 and it fixed the issues for me at least. YMMV. :crossed_fingers:

(I can't post links :confused: )
github.com/openai/openai-realtime-api-beta/pull/53

Same problem… any update?

I tried to generate a response after this error comes back from the API so that there is no silence, and it worked for me at first… the problem is that if the user then continues talking, any response.create that is sent fails. It seems to be a state error on the OpenAI side where the last response remains open and never closes.

As the conversation flows, I keep track of the active response id for the session and clear it when a response.done is received. So if and when this error happens, I check whether there is an outstanding response; if not, I just send back a response.create. It feels like the response state is sometimes not being cleared on the OpenAI side. Here is what the code snippet looks like:

// remember the response id once audio for it starts streaming
if (response.type === 'response.audio.delta') {
  if (response.response_id) {
    setActiveResponse(response.response_id);
  }
}
////
// the response finished, so clear the tracked id
if (response.type === 'response.done') {
  clearActiveResponse(response.response.id);
}
////
// on this error, retry response.create only if no response is still outstanding
if (response.type === 'error' && response.error.message === 'Conversation already has an active response') {
  const activeResponse = getActiveResponse();
  if (!activeResponse) {
    openAiWs.send(JSON.stringify({ type: 'response.create' }));
  }
}

2 Likes

I'm thinking it probably has to do with this: https://platform.openai.com/docs/guides/realtime/realtime-api-beta#handling-long-conversations. In my case I use several tools, and the responses from those tools are JSON with a lot of information.

1 Like

I’m also running into this. It occurs after the agent makes a few function calls. It occurs sooner in the conversation if the function calls return large JSON payloads.

1 Like

Yes, it was definitely that. As the conversation progresses and many calls are made to tools whose responses are huge JSON payloads, tokens accumulate, and after a certain point the OpenAI API begins to fail. The solution that worked for me was to delete messages from the history as the conversation progresses.
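
Roughly the idea (a sketch, not my exact code; itemIds and keepLast are placeholder names): record item ids from the conversation.item.created server events, then delete the oldest ones with conversation.item.delete once the history grows past a window:

const itemIds = [];   // conversation item ids, oldest first
const keepLast = 20;  // arbitrary window size

// remember every item the server adds to the conversation
if (response.type === 'conversation.item.created') {
  itemIds.push(response.item.id);
}
////
// after each completed response, drop the oldest items beyond the window
if (response.type === 'response.done') {
  while (itemIds.length > keepLast) {
    const itemId = itemIds.shift();
    openAiWs.send(JSON.stringify({ type: 'conversation.item.delete', item_id: itemId }));
  }
}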

2 Likes
import time

# block until no previous response is still in progress, then request a new one
while any(response.status == "in_progress" for response in session._pending_responses.values()):
    print("Waiting for previous responses to finalize...")
    time.sleep(0.1)  # avoid a tight busy-wait loop

session.response.create()

This should solve the errors on the client side. For the server-side errors, which happen infrequently, we will have to wait for additional development from OpenAI, I guess.