On both ChatGPT for iOS and the macOS desktop app, only Advanced Voice works, and only in a fresh 4o chat. However, on the custom GPTs I have created, when I activate voice and speak to it, it will listen and take note of what I said, but it will not read its response back out loud to me. When I exit voice, it responds correctly in chat. This started happening when Advanced Voice was rolled out today.
Same problem here, exactly as you described.
Same problem for me as well.
Yup, same. It seems the new advanced voice features don’t work with any kind of custom GPT. If you copy and paste your custom GPT instructions into a fresh chat as a prompt, the voice feature on the mobile app will work.
Facing the same issue and I have a major demo planned for a custom GPT on Monday. This is really frustrating, especially if I’m unable to showcase the voice mode as part of the demo.
To clarify - the “standard voice” feature is not working properly on my own custom GPTs that have uploaded files in the knowledge bank. The only thing that works is Advanced Voice in a fresh vanilla 4o chat with no documents stored in the knowledge bank. Simply copying the prompt over to a new GPT-4o chat, even disregarding the PDF files in the knowledge bank, won’t solve the issue. Both the iPhone app and the Mac desktop app will listen to my words and transcribe them in chat - the chat will respond, but the app will not read its response out loud, which destroys the conversational aspect of the feature.
Perhaps the devs need to ship patches to ensure read-aloud works properly with Standard Voice on all GPTs.
Same issue for me. This badly affects my daily usage.
I had really hoped there would be an update or at least a workaround for this issue by now. Yet, there seems to be no word on the dev blog. Has anyone heard anything yet?
I had a meeting with a potential developer and my demo failed. It was very embarrassing. How can such a giant of a company roll out a global upgrade without realising it was incompatible with custom GPTs?
Is this issue going to be resolved anytime soon???
TBH the upgrade is a cool gimmick but only really useful if you’re doing VO. The fact that I can’t use voice with my custom GPT is not good at all.
Same problem: voice is down on all custom GPTs, but it still transcribes.
Yep, I’m also experiencing this and it’s quite frustrating. For me, there was a window of a couple of days where custom GPT standard voice was working fine, alongside Advanced Voice for general chats. But from about Thursday (AEST) onwards, voice in custom GPTs stopped working for me.
I’m having exactly the same issues, starting from 26 September 2024. It doesn’t affect me professionally, but still… OpenAI, it’s time for a solution to this problem. Thank you!
Same issue here. I have a big presentation Wednesday focused precisely on a conversation with a custom GPT. OpenAI fumbled this deployment. Please fix ASAP.
I cannot believe that OpenAI didn’t test to ensure that custom GPTs would keep their voice capabilities. And it’s not just the custom GPTs: whenever you return to any previous conversation, the voice feature tells you that Advanced Voice only works with NEW chats!! That’s ridiculous and it’s crashing my business!!
I think we are back! Go check everyone
Yep, my custom GPT is working with voice again. It appears to be using the standard voice mode.
Yaaay… just in time for my demo today. Thank you!
Is there a way to use advanced voice mode with our created GPTs? Was it intended for us to do that, or are we only able to use the standard voice mode?