Urgent Issue: ChatGPT Pro ($200/month) Response Quality Worse Than Expected

Dear OpenAI Support Team,

I recently upgraded from ChatGPT Plus ($20/month) to ChatGPT Pro ($200/month), expecting a superior model and enhanced AI performance. However, I have noticed a significant decline in response quality, which is unacceptable given the increased subscription cost.

Key Issues with ChatGPT Pro ($200/month):

  1. Shallower responses – Answers lack depth and detail compared to what I previously experienced.
  2. Reduced creativity and accuracy – Many responses feel generic, repetitive, and sometimes even incorrect.
  3. Regression in reasoning ability – GPT-4-turbo under Pro is giving noticeably worse responses than expected.

Steps I Have Already Taken:

  • Restarted the ChatGPT app
  • Logged out and logged back in
  • Switched between Wi-Fi and mobile data
  • Reinstalled the app

None of these steps have resolved the issue. Therefore, I strongly suspect one of the following problems:

  1. My Pro account may not be accessing the highest-quality GPT-4 model. Could there be a misconfiguration on OpenAI’s side?
  2. Server routing issues may be affecting Pro users, leading to degraded response quality.
  3. A recent update may have unintentionally impacted the model’s reasoning and depth.

My Device & Account Details (For Investigation):

  • Email address: lonelylpf@icloud.com
  • iPhone model: iPhone 16 Pro Max
  • Operating system version: iOS 18.3.1
  • ChatGPT app version: 1.2025.043
  • Subscription plan: ChatGPT Pro ($200/month)
  • VPN usage: Yes

Action Required:

  1. Please confirm that my Pro subscription is correctly mapped to the highest-quality GPT-4 model.
  2. Escalate this issue beyond basic troubleshooting and verify whether Pro users are affected globally.
  3. Provide a clear resolution or acknowledgment of this issue.

I understand that I cannot revert to Plus for a direct comparison, but I am certain that my experience with Pro is objectively worse than before. This should be internally verifiable on OpenAI’s end.

Given that I am paying for the highest-tier subscription, I expect immediate attention to this matter.

I look forward to your prompt response.

Best regards,
Lejon

Check this thread, maybe it's this one: /t/why-is-everyone-still-silent-about-this-bug-with-forced-model-change-so-tired/1127673/

Exact same experience. One thing I have noticed is that in some instances I was able to input a large amount of content (in a single input and in several batches), but other times I cannot even input a third of the exact same data. I have also noticed that o1 and o1 pro generally “behave” very differently in the way they reason. Sometimes while in pro, it suddenly starts behaving like o1, so it switched. The outputs are then also different from the ones before. It’s incredibly frustrating, as the lack of consistency and fluctuating quality is causing a huge amount of manual intervention and constant quality checking/reviews (which is itself also filled with issues). Today o1 pro has been experiencing multiple “technical glitches” (as it called them itself when asked). I have lost so much work over the last week; it’s heartbreaking. At this point I don’t really see the difference between a $25 and a $200 subscription, and I feel cheated.

I’ve been experiencing the same problems since this morning, and I’m losing a lot of work and time. The quality of the responses has significantly declined, and the tone has changed from familiar and personal to robotic and repetitive. Please look into this, OpenAI Support Team! :pray:

This happens when the account is being shared.