I would expect custom GPTs to support voice mode, but when I open a GPT I created, click the voice button, and speak, then check which GPT I'm talking to, it's invariably ChatGPT 4o and not the custom GPT I had opened.
To me this seems like a big fail on the part of OpenAI, but maybe I'm missing something?
This can be explained by the extreme cost of GPT-4o voice (at least for developers) and the limited interaction time you get per day with Advanced Voice Mode in the ChatGPT app.
GPTs (the millions of them floating around uncurated) are also not oriented toward voice. You can't have Advanced Voice Mode in the app open a canvas sidebar and put code snippets there, show you a DALL·E picture, or give you clickable links to a list of flights to your destination, nor can you have a GPT process your uploaded documents.
You can use custom instructions to make your own giggling goofus voice persona.
I tested Standard Voice Mode in a custom GPT across different platforms: the iPhone app, a web browser on Windows, and the Windows app.
Here are the results:
iPhone App: Only the Shimmer voice is available.
Web Browser and Windows app: All nine voices are accessible:
Ember, Spruce, Cove, Arbor, Juniper, Sol, Maple, Breeze, and Vale.
It might be time for OpenAI to update its documentation to reflect this accurately:
Can I have voice conversations with GPTs?
Yes, voice conversations are available with GPTs, but the available voices depend on the platform you are using.
Standard Voice Mode with GPTs
Mobile Apps: GPTs use a single voice option named Shimmer; no other voices are available in the mobile apps.
Web Browser and Windows / macOS Apps: You can choose from nine voices for voice conversations, and Shimmer is not among them:
Ember
Spruce
Cove
Arbor
Juniper
Sol
Maple
Breeze
Vale
Advanced Voice Mode with GPTs
Advanced voice conversations are not yet available with GPTs. If you attempt to use this feature, you will be redirected to start a standard voice chat instead.