The Great ChatGPT o1 pro Downgrade Nobody's Talking About

Let’s talk about what’s happening with OpenAI’s $200/month o1 pro tier, because this is getting ridiculous.

Remember when you first got access? The performance was incredible. Complex analysis, long documents, detailed code review - it handled everything brilliantly. Worth every penny of that $200/month premium.

Fast forward to now:

Can’t handle long documents anymore
Loses context after a few exchanges
Code review capability is a shadow of what it was
Complex tasks fail constantly

And here’s the kicker: OpenAI never published specifications, disabled their own token counting tool for o1 pro, and provided no way to verify anything. Convenient, right?

Think about what’s happening here:

Launch an amazing service
Get businesses hooked and dependent
Quietly degrade performance
Keep charging premium prices
Make it impossible to prove anything changed

We’re paying TEN TIMES the regular ChatGPT Plus price ($200 vs $20), and they can apparently just degrade the service whenever they want, without notice, without acknowledgment, without any way to verify what we’re actually getting.

This isn’t just about lost productivity or wasted money. This is about a premium service being quietly downgraded while maintaining premium pricing. It’s about a company that expects us to pay $200/month for a black box that keeps getting smaller.

What used to take 1 hour now takes 4. What used to work smoothly now requires constant babysitting. Projects are delayed, costs are skyrocketing, and we’re still paying the same premium price for what feels like regular ChatGPT with a fancy badge.

The most alarming part? OpenAI clearly knows about these changes. They’re not accidental. They’re just counting on the fact that without official specifications or metrics, nobody can prove anything.

This needs to stop.

If you’re experiencing the same issues, make some noise. Share this post. Let them know we notice what’s happening. We shouldn’t have to waste our time documenting their downgrades while paying premium prices for degraded service.

OpenAI: if you need to reduce capabilities, fine. But be transparent about it and adjust pricing accordingly. This silent downgrade while maintaining premium pricing isn’t just wrong - it’s potentially fraudulent.

12 Likes

I fully agree.

I recently subscribed to the Pro version thinking this kind of thing wouldn’t happen there, unlike on the Plus version of course.

Another detail that really caught my attention is not being able to use o1 in Projects. It would be genuinely useful, and basic; although it wasn’t specified anywhere, I assumed it would logically be included in o1 pro, especially considering the price being paid.

Beyond that presumption of mine, I am really quite disappointed, since I still haven’t managed to work fluidly after upgrading to Pro. Given the price difference, I spent several days checking reviews and it seemed functional, but perhaps, as you indicate, it degraded over the following weeks.

2 Likes

I couldn’t agree more; the same issue happened to me.

3 Likes

I encountered the same issue today. Before today, ChatGPT Pro was working as it should. After managing to fix the issue, I realized that it occurs due to frequent interface updates, during which something goes wrong.

Here’s what I did to fix the problem:

  1. Opened the Developer Console in Google Chrome. (F12)
  2. In the settings (the cogwheel in the top right corner), I checked the box for “Disable cache while DevTools is open.”
  3. Pressed and held the page reload button for 3 seconds (while DevTools was open), after which the option “Clear cache and hard reload” appeared. I cleared the cache a couple of times this way.
  4. Logged back into my account on chatgpt.com.
  5. Switched to the ChatGPT Pro model and asked it to write an HTML calculator.
  6. Everything started working.
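The manual steps above can also be approximated from the DevTools console using the Cache Storage API. This is a sketch under assumptions: it clears service-worker caches for the current origin only, not the HTTP disk cache, so the "Clear cache and hard reload" button is still the more thorough option.

```javascript
// Sketch: delete all Cache Storage entries for this origin, then reload.
// Run from the DevTools console on chatgpt.com. Clears service-worker
// caches only; the HTTP disk cache needs the hard-reload button.
async function clearSiteCaches(storage = globalThis.caches) {
  if (!storage) return [];            // Cache Storage API unavailable
  const keys = await storage.keys();  // names of all caches for this origin
  await Promise.all(keys.map((k) => storage.delete(k)));
  return keys;                        // report which caches were removed
}

clearSiteCaches().then((removed) => {
  console.log("Removed caches:", removed);
  if (typeof location !== "undefined") location.reload(); // browser only
});
```

After the reload, log back in as in step 4 and retry.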

I tried that, and in my case it unfortunately doesn’t solve it, even from incognito without any kind of cache. The limit seems to be close to 600 characters for me on o1-mini, o1, and o1-pro. Maybe they have imposed some kind of cumulative global limit, or this is a bug, but for me it’s been unusable for days, actually ever since I upgraded to Pro :man_facepalming:

3 Likes

The worst part is the reduction of input and output context length across all o1 models, making it shorter than 4o’s, which makes o1 99% useless lol.

Literally, OpenAI devs need to provide a concrete reason why this is happening, not run away and ignore it.

1 Like

The strange thing is that so many days go by without a resolution. After 3 days of upgrading to Pro, I haven’t even managed to make any progress with o1-mini. The degradation is so great that it makes it unusable, not to mention o1 or o1-Pro.

I never imagined that after paying 10 times what I paid for Plus, the result would be like this.

The curious thing is that there has been no major outcry, so I assume this affects only a small share of Pro subscribers, and that it is something relatively recent, since even the reviews that led me to make this investment spoke wonders of it.

Beyond obviously canceling my Pro subscription if this isn’t resolved, I’ll try to request a refund and continue with Plus, along with Claude Sonnet, Cursor, and other subscriptions. My initial intention was to use o1 pro for complex algorithmic problems that those other models were incapable of addressing, and that in theory o1 pro’s “reasoning” could at least approach in an abstract way. But the reality is too big a disappointment. Incredibly, I don’t doubt that o1 pro has that capacity, but openly offering a $200-per-month subscription with so many problems is disrespectful.

1 Like

Same experience here. The other models are also garbage now.

1 Like

It worked like a charm for a week, then about 7 days ago the real downgrading of the service began. It seems intentional, or a huge shortage of computing power forced them to turn down the service quality. Either way, OpenAI knows the reason; they’re just not letting us know. Personally, I’m discontinuing my plan with OpenAI if I don’t get the same level back within a few days.

1 Like

I’ve been following this thread with great interest. Currently, I’m on the Plus subscription and have been contemplating the jump to Pro. However, reading about the performance declines many of you have highlighted is giving me serious pause.

It sounds eerily similar to the downturn we’ve experienced with the Plus tier since last August. While it’s true that new features have been rolled out, they don’t quite compensate if the core performance is deteriorating, especially as more functionalities are extended to free users. Shelling out $200 is a significant commitment, and if Pro is mirroring the same issues as Plus, it’s hard not to feel a bit disheartened.

Would love to hear more of your experiences before making a decision.

I have the same issue: Issue with o1 pro Functionality on a Specific laptop

Try to use an old prompt that worked great and compare the outcome?

If you subscribed via some platform, explain your situation to that platform and you can apply for a full refund.
I got my refund from the “fruit” with a bite

It’s really terrible; the current 4o and o1 feel almost the same as GPT-3.5, like a low-IQ version. A $200 product is no better than the free alternatives.

Omg, I am using o3-mini via the API so I don’t run out of requests.
It adds up to ~$15 per day, but it makes me so hardcore productive…
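For scale, here is a back-of-the-envelope sketch of how ~$15/day can accumulate. The per-million-token rates below are assumptions based on o3-mini’s published API pricing at the time of writing; check the current price list before relying on them.

```python
# Rough daily-cost estimate for heavy o3-mini API use.
# Prices are assumptions; verify against OpenAI's current pricing page.
PRICE_IN_PER_M = 1.10    # USD per 1M input tokens (assumed)
PRICE_OUT_PER_M = 4.40   # USD per 1M output tokens (assumed)

def daily_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one day's token usage."""
    return (input_tokens / 1_000_000) * PRICE_IN_PER_M + \
           (output_tokens / 1_000_000) * PRICE_OUT_PER_M

# Example: ~6M input and ~1.9M output tokens in a day lands near $15.
print(round(daily_cost(6_000_000, 1_900_000), 2))  # → 14.96
```

Long iterative sessions with big pasted contexts reach millions of input tokens surprisingly fast, which is how a "cheap" model still costs $15/day.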

I had to change the way I normally prompt completely.

Instead of prompting

write me XY that does A and B and C in the same way like A but different… etc

I just do

Write me XY, but give me options like in a roleplay: you give me 5 examples to choose from of stuff that logically belongs in XY, then I select what I want, then 5 proposals again, and so on. And when I say stop, you create XY.

and then I only have to use single words to correct it or say yes give me 1 and 2…
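That loop can be sketched as a chat history you grow one short reply at a time. `build_messages` is a hypothetical helper, and the system-prompt wording is an illustration of the pattern described above, not the poster’s exact prompt.

```python
# Sketch of the iterative "options" prompting pattern described above.
# build_messages is a hypothetical helper; the prompt text is illustrative.

def build_messages(task: str, selections: list[str]) -> list[dict]:
    """Build a chat history for the option-selection loop."""
    messages = [{
        "role": "system",
        "content": (
            f"Write me {task}, but give me options like in a roleplay: "
            "propose 5 examples of things that logically belong in it, "
            "wait for my selection, then propose 5 more. "
            f"When I say stop, produce the final {task}."
        ),
    }]
    # Each short reply ("1 and 2", "5", "stop") is appended as a user turn.
    for choice in selections:
        messages.append({"role": "user", "content": choice})
    return messages

msgs = build_messages("a landing page", ["1 and 2", "5", "stop"])
print(len(msgs))  # 1 system turn + 3 user turns = 4
```

Each round you would send `msgs` to the chat completions endpoint (e.g. with `model="o3-mini"`) and append the assistant’s reply before adding your next one-word selection.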

And I get it, you have all heard of open source and DeepSeek. But the reality is: if you don’t have multiple RTX 4090s or better, you won’t be able to run the 70B version.

On my old computer with an i9 and just a single RTX 4090, the 32B version is unusably slow.
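A rough VRAM estimate backs this up. The figures below are assumptions: 4-bit quantization at ~0.5 bytes per parameter, plus an assumed 20% overhead for KV cache and activations; the exact footprint varies by runtime and context length.

```python
# Rough VRAM needed to hold a quantized model's weights.
# 4-bit quantization ≈ 0.5 bytes per parameter; the 1.2x overhead
# for KV cache and activations is an assumed fudge factor.
def vram_gb(params_billion: float, bytes_per_param: float = 0.5,
            overhead: float = 1.2) -> float:
    """Estimate GB of VRAM for a quantized model, with overhead."""
    return params_billion * bytes_per_param * overhead

print(round(vram_gb(70), 1))  # → 42.0 GB: more than one 24 GB RTX 4090
print(round(vram_gb(32), 1))  # → 19.2 GB: fits a single 4090, barely
```

Under these assumptions a 70B model at 4-bit needs roughly two 24 GB cards, while 32B squeezes onto one, which matches the experience above: it fits, but with little headroom it runs slowly.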

Gemini? Lol a joke at most

Claude? Yeah, go for it; it seems better suited to frontend development.

But omg, o3-mini… you are complaining? ARE YOU KIDDING ME?

I am using o3-mini over the API, which adds up to ~$15 per day (in openwebui), so it is not exactly cheap to use. But omg, you can do stuff that GPT-3.5 was definitely never capable of. Not even close, not without tons of agents to control and correct the output in agentic workflows.