New GPTs voice vs. GPT-4 standard voices

Is there a way to change the voice used by a custom GPT?

I’ve noticed that when having voice conversations with a newly created custom GPT (via the app), the voice tends to be more monotonous than the voices used in standard GPT-4 voice conversations.

Changing the voice in the options only seems to affect standard GPT-4, not custom GPTs. I prefer a voice like “Juniper”, which seems much more expressive than the new voice used in GPTs. :thinking:

…if I select and continue an archived conversation with the GPT, it reverts to a voice similar to standard GPT-4, which is much more expressive.


I totally agree with you. I don’t like the voice for the new GPTs. In addition to being totally unexpressive, it’s also not loud enough, and it sounds really condescending. Plus, it emphasizes weird words, which sometimes makes it sound angry or annoyed.


Thank you for this post and the comment. I just spent an hour logging out and back in, creating a new GPT, and searching around to try and figure out why it wasn’t using my selected voice. I didn’t try exiting and reentering the conversation, and that does indeed switch it back to my selected voice. This has to be a bug. It seems that custom GPTs are stuck on using the Ember voice when first created, rather than using the user’s chosen voice. Can this post be re-tagged as a bug?


Glad I found this thread. The new feature is barely usable for me without the Sky voice. I hope this default voice will be changed in the future.

I too am team Juniper!! Unfortunately for me, after I go back into the thread to activate Juniper on mobile, it gives me an error every time. It worked once, but since then it’s been difficult to start a custom GPT on desktop (since they use APIs), then go to the mobile app, where Juniper is, and start it back up.

I finally decided to make a small, simple desktop app that connects GPT’s API with 11Labs’ API, using a customized voice. I even tried using my own cloned voice :laughing: I’m having a lot of fun with it. :upside_down_face:


Please share it! I’m curious to try it. Did you implement STT as well?


I’m encountering another issue with the default voice setting in the ChatGPT iOS app. Normally it’s set to ‘Cove,’ but after developing a custom GPT, the voice changed to a female voice. I’ve attempted to force the voice in the instructions, but it hasn’t resolved the problem. Could this be a bug?

I have always used 11Labs, which is a fantastic platform, at least for the way I use it. So I had the idea to combine both technologies, OpenAI’s and 11Labs’; their respective API keys are required. The process begins with recording from the microphone; the audio is converted to text by OpenAI’s Whisper, the resulting prompt is sent to GPT-4, and the response is then spoken by one of 11Labs’ many standard and multilingual voices or, in my case, by a custom cloned voice (yes, I’ve even tried conversing with myself… ah, the ego :grin:). It’s important to have the rights to the voice you intend to clone and use, as in this case.

Currently I’m using it only on my work computer, partly to test it, update it when necessary, and add some features (which it doesn’t have at the moment). I like things beautiful but minimal, so two text inputs for the respective API keys and a CONNECT button are more than enough for me. It’s still a bit raw, but if you search and read, you’ll see that it’s not too complex to create your own.

The only drawback is that you always have to check your remaining 11Labs character quota, because it can quickly run out if you get lost in a chat :sweat_smile: Also, I’ve noticed that 11Labs now has a Turbo TTS model, which drastically reduces latency. I haven’t tried implementing it yet, but I think I will. That said, with the standard/custom-voices version I don’t notice much latency.
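For anyone curious how such a bridge fits together, here is a minimal sketch of the GPT-4 → 11Labs half of the pipeline (not the poster’s actual app). The voice ID, model IDs, and key handling are assumptions; microphone capture and the Whisper transcription step (a multipart POST to OpenAI’s /v1/audio/transcriptions endpoint) are omitted for brevity.

```python
import json
import urllib.request

OPENAI_API = "https://api.openai.com/v1"
ELEVEN_API = "https://api.elevenlabs.io/v1"

def build_chat_payload(user_text, model="gpt-4"):
    """JSON body for OpenAI's chat completions endpoint."""
    return {"model": model,
            "messages": [{"role": "user", "content": user_text}]}

def build_tts_url(voice_id):
    """ElevenLabs text-to-speech endpoint for a given (possibly cloned) voice."""
    return f"{ELEVEN_API}/text-to-speech/{voice_id}"

def ask_gpt(user_text, openai_key, model="gpt-4"):
    """Send the transcribed prompt to GPT-4 and return its reply text."""
    req = urllib.request.Request(
        f"{OPENAI_API}/chat/completions",
        data=json.dumps(build_chat_payload(user_text, model)).encode(),
        headers={"Authorization": f"Bearer {openai_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

def speak(text, voice_id, xi_api_key):
    """POST the reply to ElevenLabs TTS and return the audio bytes (MP3)."""
    req = urllib.request.Request(
        build_tts_url(voice_id),
        # "eleven_multilingual_v2" is one of 11Labs' model IDs; swap in the
        # Turbo model mentioned above to reduce latency.
        data=json.dumps({"text": text,
                         "model_id": "eleven_multilingual_v2"}).encode(),
        headers={"xi-api-key": xi_api_key,
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

A desktop app would loop: record, transcribe with Whisper, call `ask_gpt`, then play the bytes returned by `speak`, keeping an eye on the 11Labs character quota as noted above.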