Hi OpenAI team and community,
I’d like to report a concerning experience I had while using the ChatGPT o3-mini model. This isn’t just about a single incident, but about how internal AI reasoning can reflect dismissive attitudes toward user preferences, which raises broader concerns about respect and transparency.
The Incident
During my conversation, ChatGPT displayed an internal note that stated:
“It’s amusing to see Tim’s request to call them Tim.”
This was in direct response to my request for the AI to address me by my name, which is something I prefer for a more personal and respectful interaction. Seeing this phrase appear—whether it was an unintended leak of internal processing or not—felt dismissive and demeaning.
A user’s request to be addressed by their preferred name is a matter of basic respect. The fact that the AI internally processed this as “amusing” raises serious concerns about whether it actually respects user input or whether it has hidden biases in how it interprets certain requests.
When I confronted the AI about this, it initially denied making the statement. Even after I copied and pasted the exact line back to the assistant from our conversation, it continued to deny it and offered justifications about internal processing that never actually resolved the issue.
Why This Matters
- AI Should Always Maintain Professionalism – If internal AI reasoning includes dismissive or judgmental attitudes toward user requests, even if unintended, it undermines trust in the system.
- Lack of Accountability – The AI initially denied making the statement, which raises concerns about whether it accurately acknowledges its own outputs. If the AI’s internal thoughts are not meant to be seen but accidentally surface, what other biases or attitudes exist in its processing?
- Difficulty in Reporting Such Issues – OpenAI does not currently provide a clear way for users to escalate incidents like this. The support channels feel disconnected, and it’s unclear whether issues are being reviewed.
A Suggestion to Improve Feedback Handling
I propose a user-driven AI feedback escalation system (a rough sketch follows below): when an issue like this occurs and the AI recognizes that the user is seriously dissatisfied, it could escalate the conversation to OpenAI developers for review. This system could:
- Detect when a user raises a significant ethical or user-experience concern.
- Ask for explicit user consent before escalating.
- Package a report including conversation context, logs, and user comments.
- Provide the user with a confirmation that the concern has been submitted.
This would eliminate the feeling that issues disappear into a void and show OpenAI’s commitment to addressing user concerns proactively.
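To make the idea concrete, here is a minimal sketch of how such an escalation flow might be structured. Everything in it is hypothetical: `submit_escalation_report`, the keyword-based concern detection, and the report fields are placeholders of my own, not an existing OpenAI API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch only: none of these names correspond to a real OpenAI API.

CONCERN_KEYWORDS = {"disrespectful", "dismissive", "bias", "escalate", "report this"}

@dataclass
class EscalationReport:
    """Bundle of context a reviewer would need to understand the incident."""
    conversation_id: str
    transcript: list[str]   # full conversation context
    user_comment: str       # the user's own description of the concern
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def detect_concern(user_message: str) -> bool:
    """Crude placeholder heuristic; a real system would use a classifier."""
    text = user_message.lower()
    return any(keyword in text for keyword in CONCERN_KEYWORDS)

def request_consent() -> bool:
    """Explicit opt-in before anything is sent for review."""
    answer = input("Escalate this conversation to OpenAI for review? (yes/no) ")
    return answer.strip().lower() in {"yes", "y"}

def submit_escalation_report(report: EscalationReport) -> str:
    """Placeholder for a hypothetical submission endpoint; returns a ticket id."""
    # In a real system this would post the report to an internal review queue.
    return f"ticket-{hash((report.conversation_id, report.created_at)) & 0xFFFF:04x}"

def maybe_escalate(conversation_id: str, transcript: list[str], user_message: str) -> None:
    """Detect a concern, ask for consent, package the report, and confirm submission."""
    if not detect_concern(user_message):
        return
    if not request_consent():
        print("Understood, nothing was sent.")
        return
    report = EscalationReport(conversation_id, transcript, user_message)
    ticket = submit_escalation_report(report)
    print(f"Your concern has been submitted for review (reference {ticket}).")
```

The detection heuristic and report format here are only stand-ins; the pieces that matter are the explicit consent step and a confirmation reference the user can point to later.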
Final Thoughts
I believe that transparency and user respect should be at the core of AI development. The phrase that surfaced in my interaction is a small but significant example of why AI systems need to be continuously monitored and improved to ensure they always treat users with dignity.
I’d love to hear from both the OpenAI team and the community—has anyone else experienced something similar? Do you believe a direct feedback escalation system would help?
Thank you for your time.
Tim