Dear OpenAI Team,
I’m writing this as a very heavy, daily user of ChatGPT…morning, night, and everything in between. I use your product not just for personal productivity, but also professionally:
- I teach and advocate for AI in large enterprises such as Coca-Cola, Citibank, Standard Chartered, and major manufacturers.
- I work in cybersecurity and AI strategy, managing complex stakeholder environments.
- I also educate users (professionals and newcomers alike) on how to harness AI properly.
Because I use ChatGPT across multiple devices (laptop, iPad, phone), and often dictate via voice input while using my stylus or referencing documentation, your tool is at the center of my work. But with the recent update, some major accessibility and usability limitations have been introduced, and they’re genuinely disrupting my workflow.
- Voice Memos – Loss of Transcript View:
I used to rely on being able to see and edit my voice transcripts before sending them, especially when adding images or documents to my prompts. Now, when I speak into ChatGPT, the message is sent immediately, without any opportunity to verify or modify it. (Has the issue been fixed where it sometimes mishears me and turns everything into Japanese or random words?)
This is not just an inconvenience… it’s a WORKFLOW and ACCESSIBILITY ISSUE.
I regularly speak multiple languages (English, Spanish, Polish, and sometimes French or Italian), and voice input doesn’t always capture everything correctly. I often needed to make manual corrections, which is now impossible!
- Free vs. Plus – Don’t Treat Paying Users the Same as Free Users
Last week, during a company workshop, a free user told me they couldn’t see their transcript after using voice. I reassured them that Plus users still had that feature. Now, it seems even we don’t. If that’s the case, it’s deeply frustrating.
I hope this isn’t a step toward a future like Black Mirror’s “Common People,” where premium users are downgraded or charged more just to retain basic usability. Jokes aside, this feels like a regression, not an improvement.
- Sending Voice Inputs Without Attachment Support
Before, I could record a voice memo, see the transcript, attach files, and send when I was ready. Now, everything is auto-sent without context or the ability to add documentation. This hinders my process when I’m combining spoken input with visual, text, or document references.
Please understand: faster isn’t always better. I have worked in business and technology for almost two decades, and I know the trade-offs between efficiency and usability well. This change sacrifices control and confidence in exchange for speed.
- Accessibility Matters More Than You Think
You’re building a product that isn’t just a novelty; it’s used by people with accessibility needs, neurodivergent users, and professionals juggling multiple workflows and multitasking contexts. Features like transcript editing, voice control, and input attachments are not optional luxuries… they’re core to the user experience for professionals like me.
My Ask:
• Please restore the ability to see and edit voice transcripts before sending.
• Allow attaching files before voice messages are sent.
• Let us opt out of automatic sending after dictation.
• Clearly differentiate Plus users in usability, not just in model access/selection.
I am not alone in this. I know many heavy users who feel the same way, and we sincerely hope you’re listening.
Thank you for reading this. I remain a loyal advocate of your product and continue to promote its use globally… but I respectfully ask that you revisit this recent update with your most engaged users in mind.
Sincerely,
Jorge Martin Jarrin Zak,
A Plus User, Educator, Cybersecurity and AI Advocate.