I can’t be the only one noticing this, but O1 Pro has gone from being fast and smart to just fast—and extremely dumb. It’s as if the model is in overdrive, spitting out answers without any real thought behind them. It’s like it’s rushing to get to the conclusion, but in doing so, it’s ignoring the process and missing critical logic.
I paid an extra $180 over the basic version for one simple reason: I wanted it to be smarter. The whole point of paying more is that I need the AI to be a bit more careful, thoughtful, and precise. But now? It’s like a turbo-charged machine that can’t do anything right. The AI gives answers quickly, but they’re so full of mistakes it might as well have given me nothing at all.
If you’re debugging code, O1 Pro spends 60 seconds generating the wrong solution and then 60 minutes trying to fix it. That’s a total waste of time. I didn’t pay for this speed—I paid for intelligence.
At this point, it feels like OpenAI is cutting corners, sacrificing quality for speed, and it’s completely missing the mark. I’m not paying extra for speed alone. If O1 Pro is going to be this dumb, it has no real value.
Most O1 users feel the same way: We’re not paying for faster mistakes. We’re paying for smarter answers.
Please bring back the intelligent, thoughtful O1 Pro. Don’t let us down like this.
I completely understand how you feel. I’ve been experiencing the same issues with O1 Pro, and it’s really disheartening. This situation has seriously affected my work progress, making it difficult to rely on the tool as I once did.
I’m now questioning whether the $200 I paid is truly worth it. If these problems aren’t resolved soon, I might have to reconsider and switch back to the $20 version when it’s time to renew.
I hope OpenAI can address these concerns and bring back the intelligent, thoughtful performance we initially valued.
We should give these guys a bit of a break; it could be that the system is unstable, or they are dealing with a lot of malicious workflows or other issues that need to be stabilized. All of us want it to be smarter. I also want it to be creative, which I guess means hallucinating more, or at least having the old temperature slider back. But this is one of the most amazing technologies ever created, so let’s expect a lot of issues before it settles down. I kind of enjoyed it when it started to go wrong, in the sense that it began getting things so wrong I thought I had dropped some psilocybin or something. At the time I just thought I was pushing it past its limits, but that’s only happened once, so maybe it was having a bad day. When it’s being dumb, I use that as an opportunity to ask what it would do if it were being smarter, and try to think that way myself.
Thank you for sharing your thoughts and understanding the situation. I really appreciate your perspective on the current challenges with O1 Pro.
I am eager to get back to the original O1 Pro, as it greatly improved my work efficiency. Could you please let me know if O1 Pro has any weekly usage limits? I haven’t been able to find any information regarding this.
Update: After posting my concerns, it seems like my account has been switched back to the better model. I don’t know if it’s because of my post, but I’m relieved to see the smarter O1 Pro again. I really hope OpenAI can keep it this way.
Those of us who are willing to pay an extra $180 for this model clearly care a lot about its intelligence. We’re tackling complex problems that demand precision and depth. As Sam Altman mentioned in his tweet, people are willing to pay significantly more for improved performance. But with that price tag comes high expectations—don’t risk ruining the trust of your most invested users.
O1 Pro is a premium product, and its quality reflects directly on OpenAI’s reputation. Please don’t compromise what makes it special, especially when it’s priced as a top-tier option.
Even your CEO, sam altman (yes, the lowercase-loving hype machine), clearly understands why users are willing to spend $200 on this model. We are here for more intelligence, as he says, to solve really hard problems. Does he even know what’s happening here, or is he just pretending to be clueless?
Or has this company lost all its integrity? Are all the honest people gone, leaving only a scam operation centered around Altman? Stealing $200 from your loyal users with shady model swaps is disgraceful. Reflect on your actions, OpenAI. It’s time to rebuild trust before it’s too late.
You make a very good point. I’m not sure if your issue has been resolved yet. I tried using it myself, but it still felt quite basic—providing simple responses with several errors. This has left me very disappointed with the O1 Pro. I hope the OpenAI team can address these problems and restore O1 Pro to its original level. Otherwise, I might feel that the $200 investment was not worthwhile.
I had a Team account because Plus was too small for me and I needed to work continuously. I was using O1-preview every chance I could until the caps hit. Now I use a Pro sub, which keeps me moving forward without interruption.
I don’t find it any less capable in terms of behaviour; it still works with my full code with very few errors, and I work on some really long stuff that the older models don’t understand as well.
Well, $200 is a lot for a full-stack dev like myself, but not having to take breaks when working on complex code makes it worth the extra.
What sucks, though, is that Teams was prepaid for a year and my sub for it expires in May 2025, so I wasn’t able to create a Pro sub on that account until the sub fully cancels. That means I have a Teams account sitting unused, lol, while I pay for Pro on another account until the first one expires, then I’ll switch back and cancel this one. I wish they would come up with a better upgrade and downgrade plan for smoother changes. At least Pro is monthly and doesn’t require a year-long sub.
I also encountered the same issue. Yesterday, the answer that O1 Pro gave me was outstanding and amazing, but today it only thought for a few seconds and gave me a very superficial and shallow answer. It’s really frustrating.
Hey, I get why you’re feeling good about the Pro model right now—it’s solid because they haven’t swapped it out on you yet. A couple of days ago, we were using the real Pro model too, and it was great. But now? It feels like they’ve watered it down. The Pro we’re on now isn’t the same as the one you’re using—the higher-end one you’ve got access to.
Trust me, we know what the real Pro feels like, and this ain’t it. That’s why we’re saying this—it’s not just a random complaint; it’s coming from experience. Hope they don’t pull the same move on your setup!
If you are willing to brazenly spend your money without understanding the value that you will get out of it there’s no one to blame but yourself.
o1 was JUST RELEASED for the pro plan. It’s insane that you want to pay for the most cutting edge technology, but also want it to be perfectly stable.
If you don’t find the value then just return to the $20/month plan. I’m sure that with time the models and the features that OpenAI offers will make that $200/month plan even more enticing, and the models will become more stable as well.
The issue here isn’t about expecting cutting-edge technology to be perfectly stable—that’s a risk anyone paying for early access understands. The real problem is that OpenAI is charging a premium and, in return, has a responsibility to deliver what it promises.
What’s unacceptable is not the instability of the model, but the sneaky practice of switching models without notice. That’s not just a question of technical limitations—it’s a breach of trust. When you’re paying $200/month, you deserve transparency and honesty, not bait-and-switch tactics. It’s unethical, plain and simple.
If OpenAI wants to maintain credibility, they need to align their actions with the expectations they’ve set by charging such a premium.
There’s been no bait and switch. If you found value, then stay at the tier. If you didn’t, feel free to use the $20/month one. Simple as that.
I’ve been burnt many times by the pace and uncertainty of OpenAI. It’s what happens when you are on the front of a wild ride. If you don’t like the bumps then hang in the back.
I’ll tell you when it’s safe for you and your expectations.
If you were facing academic pressure and struggling with complex problems like I am, you’d probably feel the same way. I’m not expecting any empathy from you.
Hitting brick walls with LLMs is brutal. I usually just trash the conversation and then try to approach it with different angles. Sometimes just talking about things in a different way is enough for me to rephrase it and destroy any of my own personal fallacies.
Personally, I prefer models like gpt-4o because they allow for smaller, more controlled, iterative processes that I can interject into at any time. I just don’t like how o1 has a hidden reasoning process that I can’t work with.
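For what it’s worth, here’s a minimal sketch of the kind of loop I mean, using the OpenAI Python SDK. The model name, prompts, and loop structure are just illustrative assumptions, not anyone’s official setup; the point is that each step is small enough to read and redirect before the next request goes out, which is exactly what o1’s hidden reasoning doesn’t let me do.

```python
# Minimal sketch of an iterative chat loop (model name and prompts are
# placeholder assumptions). Each turn resends the running conversation,
# so you can read the reply and interject before the next small step.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{"role": "system", "content": "You are a careful coding assistant."}]

while True:
    user_input = input("you> ").strip()
    if user_input.lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print(reply)
```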
I have all the sympathy for fellow developers and I can only imagine how hard releasing something like this is, but $200 and massive immediate degradation is not acceptable.
I get no output for a lot of requests, just the empty “Finished thinking” message, and it’s pretty crappy that it started being bad after I got charged for the full next month.
I also get that compute is an issue and Sora might’ve eaten a lot of the resources, but if we pay you $200/month you better make sure you prioritize allocating enough compute for us, otherwise it feels like what we get is a big F.U.