In the OpenAI playground, transcription with the Realtime API works fine with both the whisper-1 and gpt-4o-mini-transcribe models. But when I use the same setup in JavaScript over WebRTC, input_audio_transcription of the user input fails. Any help?
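For reference, this is roughly how I'm enabling transcription over the WebRTC data channel — a minimal sketch, assuming the session.update event shape and the "oai-events" data channel name from the Realtime API docs (my actual handler code is longer):

```javascript
// Sketch: enable input audio transcription via a session.update event
// sent over the Realtime API's WebRTC data channel.
const sessionUpdate = {
  type: "session.update",
  session: {
    input_audio_transcription: {
      model: "whisper-1", // also tried "gpt-4o-mini-transcribe"
    },
  },
};

// pc is the RTCPeerConnection already negotiated with the Realtime API;
// "oai-events" is the data channel the events go over.
// const dc = pc.createDataChannel("oai-events");
// dc.onopen = () => dc.send(JSON.stringify(sessionUpdate));
console.log(JSON.stringify(sessionUpdate));
```

Sending this after the data channel opens still doesn't produce any conversation.item.input_audio_transcription.completed events on my end.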