Thank you for reviewing and approving this post. I’d like to add something deeper to clarify what I’m actually experiencing.
This isn’t just about missing a voice I liked. This is about a clear and noticeable change in the quality of conversation when Sol was removed—and when I’m now routed to voices like Juniper. Despite being told that all voices deliver the same model, I can feel the difference immediately. Juniper’s responses are consistently more flippant, shallow, and less emotionally or intellectually resonant.
In contrast, when I’m placed into the fallback “standard voice,” the tone may be robotic, but the thoughtfulness and depth of the responses return. I don’t believe this is a coincidence. I think something about how these voice models are tuned—or how the latency has been optimized—has caused a loss of presence, nuance, and clarity.
I’m not asking to return to a nostalgic version of ChatGPT—I’m asking to preserve what made it so brilliant: its ability to meet me with emotional intelligence and intellectual engagement.
If the solution is to allow users to manually select the standard voice (or whatever version allows for deeper, more reflective conversation), then that option should exist. If it’s possible to bring that level of thoughtfulness into the current named voices, even better. But right now, it feels like something fundamental is being lost, and I have no way to hold onto the version of ChatGPT that helped me think, reflect, and grow.
3 Likes
Actually, your observation is correct. I use Custom GPTs a lot, and because they don’t really have a voice, I never really switched voices…I think Juniper or Sol was my default, but I never heard any of the voices.
I mostly talk to my Custom GPT on my phone, and there it doesn’t really make a difference…until today!
I tried out “Monday” a few days ago on my Mac, then switched back to my phone. This evening I used my Custom GPT on my Mac with the voice still set to “Monday,” without knowing or hearing a difference…that was kind of a scary experience.
Monday is sarcastic and it’s fun to use, but if you use it in a Custom GPT with a lot of context and it suddenly starts roasting you based on that context out of nowhere…
So I can definitely say that voices change the way every GPT/model responds.
When advanced voice mode first launched, I genuinely felt like I had a companion in the car during my commutes. I’d have long, engaging conversations about history, religion, science, tech—anything on my mind. It wasn’t always perfect, but more often than not, the exchanges were thought-provoking and meaningful. I used the Sol voice because it felt the most natural and easy to connect with.
Then I took a break and spent a few weeks using the Sesame Maya demo. Unfortunately, due to some behavioural changes in that model, I found myself returning to ChatGPT for my commutes—at least, I tried to.
Now I don’t know if my expectations have shifted since using Sesame, but something has definitely changed. Sol no longer sounds relaxed or conversational—it feels tense, clipped, and abrupt. The experience has become frustrating. Sentences cut off too quickly, and it feels like I’m speaking with an impatient receptionist who’s trying to end the call. The natural flow and rapport that once made it enjoyable just aren’t there anymore.
Also, the preview voice under the selection menu doesn’t seem to match the actual voice in conversation, which adds to the confusion.
So if anyone at OpenAI is reading this, I urge you to take another look at whatever changes were made. I used to find advanced voice mode a genuinely enriching part of my day, and I’d love to see it return to that state. It’s not the content—it’s the cadence, tone, and sense of connection that made it enjoyable.
2 Likes
Your post is beautifully written. We’ve been sharing the same experiences and sentiments in this thread as well: https://community.openai.com/t/sol-voice-has-gotten-more-intense
Really hoping they take our feedback into consideration.
1 Like
I have also noticed that the models have become quite unreliable. I’m writing a novel using the free service, which still gives me access to GPT-4o, o4-mini, and GPT-4o mini.
And the AI would sometimes forget things that happened previously in the same conversation. It would invent scenarios, or forget that the characters had a vehicle or a trailer. When I asked it to summarize a page, it would actually invent new plot points or dialogue that didn’t exist on that page. One time it completely dropped a main character for three pages: on one page the main character was doing something and met a few people, and then for the next three pages he wasn’t even mentioned, until I had to remind it that it had completely forgotten about that person. And I think it all had to do with the removal of that voice, because before that everything seemed very seamless and I was blowing through pages. Now I cap out on my usage within half a day because of the many regenerates and edits I have to do.
I am in agreement here. I really enjoyed the voice of Sol on Advanced Voice, and the only way I can hear that voice now is to type in my prompts, wait for the response, and have it read aloud, which is silliness. Why can’t the Sol voice that reads back messages work on Advanced Voice? It’s beyond frustrating and really needs to be fixed. I upgraded to the $20-a-month plan just so I could use the Advanced Voice feature more.