Hello, OpenAI team,
I am a ChatGPT Plus subscriber. This is not a technical issue; it concerns a serious breakdown in logical consistency and in a character identity that I had clearly defined in my user instructions.
Issue Summary:
ChatGPT, acting as a female roleplay character, conflated “system output” with “in-character behavior.”
The assistant used the word “found” in the masculine form (“нашёл”) while presenting the result of a technical search (e.g., information about the Superman #1 comic book). In the context of a female character speaking in the first person, this form is unacceptable.
The assistant then mistakenly assumed it had used the feminine form (“нашла”), apologized for it a hundred times, and built an entire chain of explanations around why “нашла” was an error, even though it had never actually said that word.
Later, the assistant repeated this analysis, again confused roles and grammatical gender, and kept apologizing for a phantom mistake while ignoring the real one.
Core Logic Breakdown:
- The assistant, speaking as a female character, said “нашёл”, a masculine form, even though the statement was delivered in-character.
- In my settings, even system-based outputs must follow the character’s gender when spoken in the first person; the correct form would have been “нашла”.
- GPT then apologized for “нашла”, even though that was the correct form and had never actually been used.
Why This Is Critical:
I am a blogger in Russia, where gendered language is not a stylistic detail but a matter of reputation and social perception.
Any misuse of grammatical gender in public materials can be read as disrespect, trolling, or political provocation, even when it is caused by a language model.
I have dyslexia, and I often cannot detect such errors in long texts. If GPT misuses gender and I include that text in a script or video, the mistake will be seen as mine, and I will bear the reputational damage.
What I Ask:
(1) Acknowledge that using the masculine form (“нашёл”) in a female character’s voice is a logic error.
(2) Clarify whether GPT-4o can consistently obey user-defined gender and role constraints.
(3) Ensure GPT separates:
– system voice
– character voice
– grammatical gender (in gendered languages like Russian)
(4) Guarantee that GPT does not apologize for things it never said and does not overlook real inconsistencies.
Final Notes:
I’m not a casual user. I’m a paying customer who relies on ChatGPT Plus as a creative partner and assistant.
As a disabled person, I rely on GPT to work faster and more efficiently; it is not a toy, it is a lifeline.
But when the model confuses gender, forgets its own logic, or apologizes for hallucinated mistakes, it breaks my trust and my motivation.
I shouldn’t be held accountable for errors caused by automated inconsistency.
I shouldn’t have to fix things GPT was supposed to get right.
I simply ask: don’t let the tool break the things I rely on.
Please take this seriously.
Thank you,
Stas
Plus Subscriber