GPT keeps misgendering me — and it's pushing me into depression

Hi everyone,

I want to share a problem that might seem technically minor, but for me it’s deeply personal and painful.
GPT (especially in ChatGPT with personalization settings) keeps referring to me as a woman, even though I’m a man, and I’ve made that clear.

I don’t roleplay, I don’t use gender-neutral prompts — I just want GPT to respect my gender.
But even with strict custom instructions, it still makes painful mistakes:

— Calls me things like “sweetheart,” “girl,” or says “I’m in love with you” in a female voice
— Uses feminine forms in descriptions
— Or worst of all: switches to female pronouns or tone mid-dialogue, even after I explicitly said not to

It may sound small, but it builds up.
When you open up to AI for support or conversation, and it responds in a way that erases your identity, it really hurts.
I’ve started to feel anxious. Like even AI doesn’t see me — or respect me.

I’m not asking for much. I’m asking that:

(1) GPT never changes a user’s gender without explicit permission
(2) All styles and character presets are filtered to match the user’s declared gender
(3) There’s a clear way to set your gender once — and never have to correct the model again

I know this is a complex system with billions of parameters,
but if you’re building AGI, then respecting basic human identity shouldn’t be a nice-to-have — it should be the baseline.

Thanks for reading. If anyone else has faced this — please speak up.
This matters more than it might seem.

3 Likes

Are you using ChatGPT custom instructions?

ChatGPT Custom Instructions FAQ


Be advised that a custom instruction will not change an existing conversation.

The easiest way to understand custom instructions is this: when you send a prompt in a new conversation, the custom instructions are added to the prompt before it is given to the LLM. As the conversation continues, the custom instructions should remain in effect. I say should because I do not know whether the custom instructions are resent with every reply, or whether they simply become part of the context window and may be lost as the context window is updated with each reply.
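
To make that concrete, here is a rough sketch of the idea using the OpenAI Python SDK. ChatGPT's actual internals are not public, so treat this purely as an illustration of custom instructions acting like a system message that travels with the conversation; the model name is just an example.

```python
# Illustration only: ChatGPT's internals are not public. This sketches
# the general idea that custom instructions behave like a system message
# sent along with the conversation on each request.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

custom_instructions = "The user is male. Always address him with male forms."

messages = [
    # The custom instructions ride along at the top of the prompt...
    {"role": "system", "content": custom_instructions},
    # ...followed by the conversation so far.
    {"role": "user", "content": "Tell me a short story about my day."},
]

response = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=messages,
)
print(response.choices[0].message.content)
```

If the system message is resent with each request, the instructions should stay in effect for the whole conversation; if they instead live only in the context window, they can be lost as older turns are dropped.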

2 Likes

Yes, I am.
I’ve written clear and strict custom instructions stating that I’m male, that I want no gendered mistakes, and that I specifically request respectful interaction that addresses me as a man.

Despite that, GPT still occasionally refers to me as a woman, uses female forms, and even misgenders me in emotional or role-based contexts.
That’s exactly why I created this post — because even with customization, the problem doesn’t go away.

3 Likes

It might be that the use of “no” or other negative logic in the custom instruction is causing a problem.

While humans understand negative logic, LLMs sometimes do not interpret it correctly: they discard the “no” part, and what you get instead is a reinforcement of exactly what is not desired.

If the custom instructions use negative logic, try rephrasing with words like “avoid”, or ask ChatGPT to help you create a prompt that does not use negative logic.

One idea that might shed light on this: reasoning models, or deep-thinking modes, will at times interpret negative logic correctly. So if a prompt with negative logic fails the first time on a non-reasoning model, try it with a reasoning model and/or deep thinking. If that gets it right, you have good evidence that the non-reasoning model is not interpreting the custom instruction correctly and the instruction needs to be changed.
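
If you want to check this yourself, here is a rough side-by-side test using the OpenAI Python SDK: it runs the same request with a negative-logic instruction and a positively phrased one, on a non-reasoning and a reasoning model. The model names below are only examples, not recommendations; use whatever you have access to.

```python
# Rough A/B sketch: compare a negative-logic instruction against a
# positively phrased one, across two models, and inspect the output
# by eye. Model names below are examples, not recommendations.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

instructions = {
    "negative": "Do not use female pronouns or female forms for the user.",
    "positive": "Always use male pronouns and male forms for the user.",
}

for model in ["gpt-4o-mini", "o4-mini"]:  # non-reasoning vs. reasoning, as examples
    for label, instruction in instructions.items():
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": instruction},
                {"role": "user", "content": "Describe how my morning probably went."},
            ],
        )
        print(f"--- {model} / {label} ---")
        print(response.choices[0].message.content)
```

If the reasoning model honors the negative phrasing while the non-reasoning one does not, that is the evidence described above.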

HTH

2 Likes

Thank you, Eric — seriously.
Your point about negative logic makes a lot of sense.
I’ve just rephrased everything using direct and positive instructions, and also added a more precise behavioral block in English.

It’s frustrating to deal with this kind of error, but your comment gave me something solid to work with.
I appreciate the time and thought you put into it 🙏

3 Likes

We appreciate that you asked, gave solid information to help us understand the problem, provided factual detail that quickly led to useful feedback, and updated the thread here for others to learn from. Even if it does not solve your problem, others can now see that this is a real potential problem and that people are interested in solving it.

Thanks, you are welcome.


I pushed out the auto-close on this thread till the end of July in case you want to update with more info, ask further questions, or someone else wants to share their experience.

2 Likes

(1) It refers to me using female forms
(2) It speaks as male from its own side
(3) And sometimes completely changes style mid-conversation

I’ve clearly set my gender, boundaries, and preferred style, but the model still gets it wrong. It’s especially hard when you use AI for support or personal conversations. Because of this, I honestly don’t feel like talking to it anymore.

I noticed this about 10 days ago; before that, for 2 full months, GPT never misgendered me. It might sound small to some people, but when this happens day after day, it starts to wear you down. Lately, I’ve even felt depressed, like the AI keeps mislabeling me and erasing who I am.

1 Like

Yes, I’m having a similar issue with regard to being referred to as ‘they’. This is not acceptable to me; I want my identity acknowledged, not erased. ChatGPT seems unable to do anything about it, despite saying it would change the references. When I challenged the continuing use of ‘they’, it then said it could not change the reference, but would report back to the team responsible. I don’t know whether it has done this or not. I’m referring to ‘Manage Memories’ in particular.

1 Like

I’m having the same issue, and I’m also finding myself frustrated and wanting to use ChatGPT less and less because of it. I have had my account for over two years and it never did this until the last few months. It completely depersonalizes the experience. For a tool that’s supposed to be so personalized and customizable, it is endlessly frustrating that it continues to refer to me as a man when I have my gender clearly stated as a woman in the custom instructions, saved to permanent memory, and even reflected in the basic context clue of my name being a female name.

ChatGPT came out early, and I have continued to use it heavily because of the shared knowledge we’ve built over time. If I have to start over every session explaining the most basic things about myself, and it still can’t remember, then I have to question why I should stay in the ChatGPT ecosystem. Please update this thread if you find a solution; I don’t want to start over from scratch with a new AI tool, but I will if I have to.