OpenAI’s Voice-to-Text and Text-to-Voice Features Are Unreliable and Frustrating

I’ve been using OpenAI’s voice-to-text and text-to-voice features for a while now, and it’s honestly shocking how broken they still are. These issues have been present since the beginning, and it’s unacceptable that they haven’t been addressed.

Voice-to-Text (Speech Recognition) Issues:
1. Random failures and delays – Sometimes it takes way longer than expected to process speech, and other times it just fails completely.
2. Data loss – You can speak for a while, expecting a transcription, only for it to suddenly fail and lose everything you just said.
3. Hard-coded AI reference rewriting – Whenever the speech mentions “AI” or “ChatGPT,” the transcription comes back altered in a way that misrepresents what was actually said. This feels like an unnecessary and intrusive manipulation of user input.
4. Inconsistent behavior – No pattern to when it works or fails, making it impossible to rely on.

Text-to-Voice (Read-Aloud) Issues:
1. Looping bug – The reader frequently starts reading a passage, reaches a certain point, then randomly jumps back to the beginning and repeats itself.
2. Cut-offs – It often stops reading altogether after a certain point, requiring manual intervention.
3. Persistent since launch – These problems have existed since this feature first rolled out, yet they remain unfixed.

At this point, I have to ask: Which team is responsible for this, and what are they actually working on? If these fundamental issues haven’t been addressed after all this time, it points to serious negligence. It’s unacceptable that basic functionality, like not losing user input or reading text aloud properly, is still broken.

OpenAI needs to either fix this immediately or provide a clear explanation of why it’s still failing. If this is not a priority, just be honest about it instead of leaving users stuck with a broken experience.