Issues with gpt-realtime today?

Hi @OpenAI_Support,

Is everything ok with gpt-realtime today? The status page shows “fully operational,” but we’ve encountered two issues that we hadn’t experienced before:

1) One response hit the 4,096 output token limit, even though the actual message was extremely short.

{
    "type" => "response.done",
    "event_id" => "event_DBMwLa7q1Tx9W7LAr0TOH",
    "response" => {
        "object" => "realtime.response",
        "id" => "resp_DBMvjn5wXtvEQrKZn1a1m",
        "status" => "incomplete",
        "status_details" => {
            "type" => "incomplete",
            "reason" => "max_output_tokens"
        },
        "output" => [
            {
                "id" => "item_DBMvj40VMuta3jTJ57rj1",
                "type" => "message",
                "status" => "incomplete",
                "role" => "assistant",
                "content" => [
                    {
                        "type" => "output_audio",
                        "transcript" => "D'acord, un segon."
                    }
                ]
            }
        ],
        "conversation_id" => "conv_DBMvMSYRwVE115QeCsDhK",
        "output_modalities" => [
            "audio"
        ],
        "max_output_tokens" => "inf",
        "audio" => {
            "output" => {
                "format" => {
                    "type" => "audio/pcm",
                    "rate" => 24000
                },
                "voice" => "cedar"
            }
        },
        "usage" => {
            "total_tokens" => 7754,
            "input_tokens" => 3658,
            "output_tokens" => 4096,
            "input_token_details" => {
                "text_tokens" => 3487,
                "audio_tokens" => 171,
                "image_tokens" => 0,
                "cached_tokens" => 2624,
                "cached_tokens_details" => {
                    "text_tokens" => 2624,
                    "audio_tokens" => 0,
                    "image_tokens" => 0
                }
            },
            "output_token_details" => {
                "text_tokens" => 13,
                "audio_tokens" => 4083
            }
        },
        "metadata" => nil
    }
}

As you can see, the message only included: “D’acord, un segon.” (“Okay, one second.”) Hitting 4,096 tokens is completely unreasonable for a message that short — per the usage details, 4,083 of the 4,096 output tokens were audio tokens — and it broke the flow of the conversation.
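In case it helps anyone hit by the same thing, here is a minimal sketch (Ruby, matching the hash shape in the dump above; the helper name is ours, not part of the API) of how a client can classify `response.done` events so the app can recover instead of stalling:

```ruby
# Classify a "response.done" event by its status, so the client can decide
# whether to continue, retry, or surface an error. The event shape is taken
# from the dump above; this helper itself is just an illustrative sketch.
def classify_response_done(event)
  response = event["response"]
  case response["status"]
  when "completed"
    :ok
  when "incomplete"
    # reason "max_output_tokens" means the output token cap was hit;
    # in our case almost all of the 4,096 tokens were audio tokens.
    if response.dig("status_details", "reason") == "max_output_tokens"
      :truncated
    else
      :incomplete
    end
  when "failed"
    :failed
  else
    :unknown
  end
end
```

With the first dump above, this returns `:truncated`, which is the case we currently have no good recovery for.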

2) Another response took 1 minute and 3 seconds to fail. After we sent response.create we received nothing at all, until response.done finally arrived with a server error.

{
    "type" => "response.done",
    "event_id" => "event_DBJOzrwzd28f7bZrkvxsX",
    "response" => {
        "object" => "realtime.response",
        "id" => "resp_DBJNzt2rkrCTgclawGbBh",
        "status" => "failed",
        "status_details" => {
            "type" => "failed",
            "error" => {
                "type" => "server_error",
                "code" => nil,
                "message" => "The server had an error while processing your request. Sorry about that! Please contact us through our help center at help.openai.com if the error persists. (include session ID in your message: sess_DBJNWrwcsO6Npur1IwZ2S). We recommend you retry your request."
            }
        },
        "output" => [],
        "conversation_id" => "conv_DBJNWexIn4fhYLchauOGr",
        "output_modalities" => [
            "audio"
        ],
        "max_output_tokens" => "inf",
        "audio" => {
            "output" => {
                "format" => {
                    "type" => "audio/pcm",
                    "rate" => 24000
                },
                "voice" => "cedar"
            }
        },
        "usage" => {
            "total_tokens" => 4014,
            "input_tokens" => 4014,
            "output_tokens" => 0,
            "input_token_details" => {
                "text_tokens" => 3680,
                "audio_tokens" => 334,
                "image_tokens" => 0,
                "cached_tokens" => 0,
                "cached_tokens_details" => {
                    "text_tokens" => 0,
                    "audio_tokens" => 0,
                    "image_tokens" => 0
                }
            },
            "output_token_details" => {
                "text_tokens" => 0,
                "audio_tokens" => 0
            }
        },
        "metadata" => nil
    }
}
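Because waiting 63 seconds for the server to report the failure is not viable in a voice conversation, we are considering a client-side watchdog. This is purely an assumption on our side (the class and its API are ours, not part of the Realtime API): start a timer when response.create is sent, and if no response.done arrives within a deadline, cancel and retry ourselves.

```ruby
# Hypothetical client-side watchdog: tracks how long a response has been
# pending so the client can give up (e.g. send "response.cancel") well
# before the server's own 60+ second failure report.
class ResponseWatchdog
  def initialize(timeout_seconds)
    @timeout = timeout_seconds
    @started_at = nil
  end

  # Call when "response.create" is sent.
  def start(now = Time.now)
    @started_at = now
  end

  # Call when "response.done" arrives.
  def finish
    @started_at = nil
  end

  # True when a response is pending and the deadline has passed.
  def expired?(now = Time.now)
    !@started_at.nil? && (now - @started_at) > @timeout
  end
end
```

The app's event loop would check `expired?` periodically and, when it fires, cancel the pending response and retry instead of leaving the user in silence.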

Meanwhile, subsequent response.create attempts failed with “conversation_already_has_active_response”, which makes sense given that the earlier response presumably never finished from the server’s point of view.
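Our current workaround for that cascade (sketched below; the helper and its return shape are our own invention, though response.cancel and response.create are real Realtime API client events) is to cancel the stuck response before retrying:

```ruby
# Given the error code from an "error" event, decide which client events to
# send next. For "conversation_already_has_active_response" we first send
# "response.cancel" to clear the stuck response, then retry "response.create".
def recovery_events(error_code)
  if error_code == "conversation_already_has_active_response"
    [{ "type" => "response.cancel" }, { "type" => "response.create" }]
  else
    []
  end
end
```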

Could someone please check on this?

Thank you 🙏
