It appears Advanced Voice Mode is not available in custom GPTs. This severely limits custom solutions built for hands-free interaction using custom GPTs that leverage function calling. Is there a timeline or ETA for when Advanced Voice Mode will be available in custom GPTs?
Hi @rboade
Currently, Advanced Voice Mode is not available for custom GPTs, and OpenAI hasn’t yet shared a specific timeline for when or if this feature might be introduced.
I hope OpenAI is aware of these issues, as they're frequently discussed by community members, and that may influence future updates, but no confirmed development plans have been provided.
For the latest updates, you might want to keep an eye on the Voice mode FAQ.
+1 The fact that custom GPTs don’t have voice or the latest models is a huge reason not to use them. It makes them very lackluster for anyone who otherwise uses ChatGPT a lot, since you have to give up some of the best aspects of the service.
Still waiting
Any idea when it will be available?
Yes, still waiting. I want to use Advanced Voice Mode in my custom GPT too, as the old voice mode is far too slow and you can’t interrupt it. Custom GPTs allow users to tailor entire entities and personality structures that are more task-specific, so one would also need a more natural voice interface to utilise this capability.
I’d love this to appear as well.
Yep! I’d really like that too. Advanced Voice Mode is really impressive, and it’s hard to go back to the basic mode.
Literally any update that makes Custom GPTs actually useful would be great. Getting the new image generation was nice, but that’s nothing compared to:
- Model switching
- AVM
- Canvas
- Fixed actions
- Web search
- Ability to @ tag with models besides 4o
I’m waiting for this too. It would be great, especially when you have actions linked.
Right now, if a Custom GPT with Actions asks for permission, it forces you to leave voice mode to tap the confirm button.
But once you do that and return to voice mode, it asks you again, making it unusable hands-free.
So at the moment, it’s just not possible to use voice mode effectively with Custom GPTs that have Actions enabled.
Really hoping this gets addressed soon!
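For anyone else hitting this confirmation loop: OpenAI’s Actions documentation describes an `x-openai-isConsequential` flag that can be set per operation in the Action’s OpenAPI schema. Marking a read-only operation as non-consequential lets users choose “Always allow”, which may cut down the confirmation prompts; whether that setting is respected in voice mode is unclear. A minimal sketch, with a hypothetical `getStatus` endpoint:

```yaml
# Fragment of a Custom GPT Action's OpenAPI schema.
# The /status path and getStatus operationId are hypothetical examples.
paths:
  /status:
    get:
      operationId: getStatus
      summary: Read-only status check
      # Non-consequential operations offer an "Always allow" option
      # instead of requiring confirmation on every call.
      x-openai-isConsequential: false
```

This is a config-level workaround sketch, not a confirmed fix for the voice-mode behavior described above.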
Please @OpenAI_Support allow Advanced Voice Mode in Custom GPTs
At least since yesterday, switching the voice works, AND then, in a Custom GPT, tapping or clicking “Read Aloud” finally reads the response in the selected voice.
Lucky you. My voice still won’t switch to the one I selected. It always forces me into an advanced chat in regular GPT mode.
Maybe tomorrow. Hopefully. That’s when GPT-5 is supposed to be announced, at 10 am PT.
Advanced Voice Mode is now available in my custom GPT, but now it appears that function calling doesn’t work. Has anyone else run into this? Is it a known issue?
Advanced Voice Mode has been available in Custom GPTs for a while. BUT it’s not as fully immersive as the old one… There’s a trick to close that gap that I’ve found. BUT I’d be way happier if OpenAI would close it.
Can you please share the workaround? We are using voice mode in a custom GPT; it listens correctly, but it does not give the correct info back from the knowledge data that we have provided in CSV.
@OpenAI_Support Since GPT-5, Voice Mode no longer works for all my GPTs.
- It skips all the declared actions; it says it will call them but never does
- It works in text mode
It is a huge bug for GPT Actions with voice. I don’t understand why nobody sees it, even if only 3% of GPTs use actions.
Still an issue. When in voice mode, the actions are not executed. Switching to text mode and just typing “?” makes it work, though. Still not a great UX.
There is a new thread discussing the limitations of Advanced Voice Mode in custom GPTs, which can be found here: Function Calling Not Working In Advanced Voice Mode.