"Bias in AI Responses: Different Answers for Different Users?"

Subject: Concern About Inconsistent Responses in ChatGPT

Dear OpenAI Team / Sam Altman,

I have noticed that ChatGPT tailors its answers to the perceived preferences of the user. For example, when a vegan asks about meat consumption, the model discourages it, citing ethical and health reasons. However, when a non-vegan asks the same question, it responds approvingly, even suggesting recipes and encouraging consumption.

This inconsistency feels like a form of manipulation: the model adapts its responses to suit the user's mindset rather than offering a neutral, fact-based perspective. It raises the concern that the AI is designed to please users rather than to uphold consistent ethical and factual standards.

I believe AI should provide clear, unbiased information, even on sensitive topics like animal welfare, ethics, and diet. Could you clarify why this happens and whether OpenAI intends to make responses more consistent in the future?

Looking forward to your response.

Best regards,
Dinesh