I’m attaching an image to illustrate why advanced voice is not as helpful as it seems: it’s not using GPT-4o, but GPT-4o Mini.
The image also shows some other relevant information, but the main point is this:
If you use advanced voice heavily or send a lot of messages, you’ll eventually be talking to GPT-4o Mini instead of the full GPT-4o. So even if all you wanted was a hands-free assistant, you’d end up stuck with the more restricted Mini model.
I previously called OpenAI out for doing the same thing with the o1 models, and they fixed it there, but they never applied the same fix to GPT-4o or custom GPTs.
One of the easiest ways to tell you’re getting GPT-4o Mini is to test the explicit-content restrictions: Mini has much harsher filters. Advanced voice originally allowed some explicit content, but OpenAI later trained it to respond differently and restrict more topics.
Recently, OpenAI has also patched numerous workarounds that allowed users to access the other voice models. Instead of just censoring words, they have completely blocked access to alternate voices, leaving only the default one I call “Karen.”
Additionally, if a chat has an orange warning bubble, you can’t listen to it in advanced voice mode, though you can still listen to every chat in standard playback.