Access to GPT-4o's Audio Capability

Our team has been working on conversational AI, and your recent announcements about the GPT-4 Omni model's audio capabilities have made us eager to integrate them into our work. We would like to use GPT-4o's native audio support to circumvent the current compulsory three-layer architecture (speech-to-text, text generation, then text-to-speech), and we are looking forward to trying the API with GPT-4o's audio capability. Is early access available? If so, how can one apply for it? If not, when is the audio feature planned for release? Alternatively, is there anyone who can assist with this issue another way?


My GPT-4o literally doesn't do a single thing they promote in the video. It just seems somewhat faster, but even then it gets overwhelmed and breaks when you try to switch to voice.