The latest app update removed the ability to use Standard Voice mode inside projects. Previously I could access it there by first starting a text conversation. The only place it still works is inside GPTs, but those lack memory and conversational context, so it's mostly useless there; or rather, the usage cost is too high, since I'd have to manage context manually.
The new voice mode, which is truly multimodal, is great for fluff talk, but it lacks, and probably always will lack, the depth of GPT-4o's text-based answers, which were the backbone of Standard Voice mode.
Furthermore, the tone and the actual voice of Standard Voice mode were truly extraordinary. Yes, it lacks the adaptive intonation of the truly multimodal voice in Advanced Voice mode, but that's entirely irrelevant given the depth of the content (the most important part, 90%) and the aesthetic of the voice itself (the second most important part, 10%), both of which Advanced Voice mode lacks.
I understand there is a mass of users who will find Advanced Voice mode great for basic daily conversations, being able to interrupt it mid-sentence and whatnot. What I don't understand is what it costs you, OpenAI, to keep Standard Voice mode alive as an option.
Can you bring it back? I mean, it's already there; just let us use it.