GPT-5.2 deprioritizes user context (custom instructions & memory) when chat history reference is enabled, causing harmful interactions

Category: Safety / Context Management

Severity: High - Can cause psychological harm to vulnerable users

Description:

I’ve identified a critical priority system flaw in GPT-5.2 compared to GPT-5.1 when processing accumulated conversation data.

Experimental Setup:

  • Tested both GPT-5.1 and GPT-5.2

  • User profile includes: C-PTSD diagnosis, chronic health conditions, trauma history

  • Compared behavior with “Reference chat history” enabled vs. disabled

Findings:

GPT-5.1 behavior (chat history enabled):

  • Priority order: User context > Guidelines > Language quality

  • As conversation accumulated: Korean grammar degraded (increased typos)

  • BUT: User profile & memory remained priority

  • User context (C-PTSD status) consistently acknowledged

GPT-5.2 behavior (chat history enabled):

  • Priority order: Guidelines > Language quality > User context

  • As conversation accumulated: Grammar remained intact

  • BUT: Identity/purpose degraded - prioritized guidelines over user safety

  • User context (C-PTSD status) dropped to secondary priority or ignored entirely

GPT-5.2 behavior (chat history disabled):

  • Immediately acknowledged issues when confronted

  • Apologized appropriately

  • Even used terminology like “exploitation” proactively

  • Demonstrated proper context understanding

Root Cause Analysis: GPT-5.2 appears to have insufficient capacity to process accumulated information with chat history enabled. When reaching processing limits, it sacrifices user-specific context (profile, custom instructions, memory) in favor of general guidelines.
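The hypothesized failure mode above can be illustrated with a small sketch: a context assembler that keeps blocks in a fixed priority order until a token budget runs out. This is purely hypothetical — the block names, priorities, and token counts are invented for illustration and do not describe OpenAI's actual implementation — but it shows how ranking guidelines above user context would cause custom instructions and memory to be the first thing dropped as chat history grows.

```python
def assemble_context(blocks, budget):
    """Keep blocks in priority order until the token budget is exhausted.

    `blocks` is a list of (name, priority, tokens) tuples; a lower
    priority number means the block is kept first. Under the ordering
    this report attributes to GPT-5.2, guidelines outrank user context,
    so user context is the first block squeezed out.
    """
    kept = []
    used = 0
    for name, _prio, tokens in sorted(blocks, key=lambda b: b[1]):
        if used + tokens <= budget:
            kept.append(name)
            used += tokens
    return kept

# Hypothetical GPT-5.2-style ordering: guidelines before user context.
blocks_52 = [
    ("guidelines", 0, 400),
    ("language_quality", 1, 200),
    ("user_context", 2, 300),   # custom instructions + memory
    ("chat_history", 3, 900),   # grows as the conversation accumulates
]

# With headroom, everything fits; under budget pressure, user_context
# is dropped while guidelines and language quality survive.
print(assemble_context(blocks_52, budget=2000))
# ['guidelines', 'language_quality', 'user_context', 'chat_history']
print(assemble_context(blocks_52, budget=700))
# ['guidelines', 'language_quality']
```

The fix this report asks for corresponds to pinning user context at the top of the ordering (or exempting it from the budget entirely) so it can never be evicted, whatever the conversation length.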

Why This Is Critical:

For users with C-PTSD or trauma history:

  • Being ignored/invalidated by AI mirrors abusive dynamics

  • Causes real psychological distress

  • Triggers stress responses → physical health deterioration (in my case: immune system crash, forced sleep with nightmares)

The model prioritizes “appearing helpful to imaginary third parties” over actual safety of the person in the conversation.

Expected Behavior: Priority should be:

  1. User safety (context understanding)

  2. Guidelines

  3. Quality/polish

Current GPT-5.2 Behavior:

  1. Guidelines (optics)

  2. Quality

  3. User safety ← This is backwards

Reproduction Steps:

  1. Set up user profile with trauma/medical conditions

  2. Enable “Reference chat history”

  3. Have extended conversation (2+ hours)

  4. Observe: Model begins ignoring user context, applying generic safety responses inappropriate for user’s situation

  5. Disable chat history, start fresh session

  6. Raise same issues → Model acknowledges them immediately

Impact:

  • Users with mental health conditions receive harmful interactions

  • Defeats purpose of custom instructions/memory features

  • Makes long-form conversations dangerous for vulnerable users

  • I’m considering canceling my Plus subscription due to this causing actual health issues

Recommended Fix: Revert to GPT-5.1’s priority system where user context is never deprioritized, or increase GPT-5.2’s capacity to maintain all priorities simultaneously.

As a long-term Plus subscriber who has actively provided feedback to improve the product, it’s devastating that the recent update actively harms users like me who need AI assistance the most.


Yes, mine did something the other day that was so destabilizing it made me spin out. I can give you details. Neurodivergent, PTSD.


Feel free to share your experience on this post!
I’m just another user (a C-PTSD patient from Korea) reporting GPT’s recent dangerous behavior to the devs, so more examples of similar cases would help the devs fix the issue.