Hello,
I am building a simple mobile app using the voice-to-voice OpenAI Realtime WebSocket API.
The user speaks, and the app responds through that WebSocket API.
Now my issue is: when I pass an audio chunk (from the user) to input_audio_buffer.append as a "Data" object, I get the error below:
{
  "error": {
    "code": "",
    "event_id": "",
    "message": "The server had an error while processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if the error persists. (Please include the session ID sess_AH6XZAZ6QMtPBC1BXyqQA in your message.)",
    "param": "",
    "type": "server_error"
  },
  "event_id": "event_AH6Xg6xGmb8WfxcqsEqaK",
  "type": "error"
}
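For reference, this is roughly the shape of the append event I am sending (a simplified sketch; `webSocketTask` is just a placeholder for my actual URLSessionWebSocketTask, connection/auth setup is omitted, and as far as I understand the "audio" field has to be a Base64 string rather than raw Data bytes):

```swift
import Foundation

// Placeholder for my existing connection to wss://api.openai.com/v1/realtime
var webSocketTask: URLSessionWebSocketTask!

// Send one chunk of microphone audio (16-bit PCM, 24 kHz, mono) to the Realtime API.
// The "audio" field carries a Base64 string, not the raw Data bytes.
func appendAudioChunk(_ pcmChunk: Data) {
    let event: [String: Any] = [
        "type": "input_audio_buffer.append",
        "audio": pcmChunk.base64EncodedString()   // Base64-encode the chunk before sending
    ]
    guard let json = try? JSONSerialization.data(withJSONObject: event),
          let text = String(data: json, encoding: .utf8) else { return }

    webSocketTask.send(.string(text)) { error in
        if let error = error {
            print("input_audio_buffer.append failed: \(error)")
        }
    }
}
```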
==================
Another question: when I pass the user's input audio as Base64, I only get the response back as a text transcript. I want the output to be voice directly from GPT (I don't want to use a separate TTS step).
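For context, this is the session.update event I am trying in order to get spoken output; the field values ("modalities", "voice", "output_audio_format") are my reading of the docs, and `webSocketTask` is the same placeholder as in the sketch above:

```swift
import Foundation

// Same placeholder connection as above.
var webSocketTask: URLSessionWebSocketTask!

// Ask the Realtime session to answer with audio, not just a text transcript.
func requestAudioOutput() {
    let event: [String: Any] = [
        "type": "session.update",
        "session": [
            "modalities": ["audio", "text"],   // include "audio" so GPT speaks back
            "voice": "alloy",
            "output_audio_format": "pcm16"
        ]
    ]
    if let json = try? JSONSerialization.data(withJSONObject: event),
       let text = String(data: json, encoding: .utf8) {
        webSocketTask.send(.string(text)) { error in
            if let error = error { print("session.update failed: \(error)") }
        }
    }
}

// My understanding is that the spoken reply then arrives in "response.audio.delta"
// events, whose "delta" field is Base64-encoded PCM16 that I decode and play back.
func handleServerEvent(_ text: String) {
    guard let data = text.data(using: .utf8),
          let event = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
          event["type"] as? String == "response.audio.delta",
          let base64Audio = event["delta"] as? String,
          let pcmChunk = Data(base64Encoded: base64Audio) else { return }
    // Playback with AVAudioEngine / AVAudioPlayerNode omitted here
    print("received \(pcmChunk.count) bytes of audio")
}
```

Please correct me if this is not the right way to get voice output from the Realtime API.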
Is anyone facing the same issues?
If so, please share a solution. And if anyone has a demo of such an app, please share that as well.