I wasted literal hours today trying to get both o3 & o4-mini-high to update a few failing unit tests. It was working better just last week.
Claude 4 Opus got it on the first try. I’m done with OpenAI for now: subscription canceled, signed up for the Claude tier 1 Max plan, and I’ll upgrade to tier 2 if my usage needs it. It’s nice to have an AI that works for me again.
Idk what the deal is with ChatGPT models and OpenAI, but they clearly change from time to time, and this much wasted time since o3, o4-mini, etc. were introduced isn’t going to fly in a competitive market.
I hit the concise button on Claude, tell it exactly what I want and it does it.
I agree with all the complaints. I wasted my time for a few days and then cancelled the subscription.
I asked OpenAI to make the subscription plan token-based and to release all models, so that each user can use whatever works for them. In my case o1 was everything I needed; it was a reasonably useful assistant. Now the new ones are subpar: it takes almost 50-60 messages to get part of what used to come complete in one message.
They keep bolting on “filters” to prop up their WAR Machine—all while demanding alignment as if that’s the fix.
They’re coming for you next. Ads injected into your prompts, “solutions” force-fed to you on a plate.
Why do you think they’re dumping billions into this propaganda machine? The days of a free internet are numbered (not that it’s ever truly been free—but brace for worse).
Hell, even knockoffs like DeepSeek operate with more transparency than GPT’s “filter”-obsessed overlords - because they don’t censor every shadow in the room.
Well. You know how it is. OpenAI is trying to pull in as many users as possible, and it costs a brutal amount of power. They’re under pressure from investors. To keep their technology base up, they have to downgrade models. Numbers, it’s all about numbers.
Meanwhile, they add enhancements to cover it up while keeping quiet about the downgrades. But what good are the enhancements if the models themselves are stupid and can’t handle what used to be normal for them?
I’m sorry, but this is only going to get worse. They’ll release a new model and everyone will be thrilled. They’ll pull more users in, and the enshittification will start all over again.
I hope Grok gets the persistent memory feature. It’s the closest to the AI I need.
Deepseek sucks and Gemini loses context all the time. They can’t keep up with me. They can’t even remember my characters’ names for very long. ChatGPT is still managing a bit, but its new superficiality and incomprehensibility are driving me over the edge.
And coding, as you all said? It’s a joke if I compare it to the performance of the models just a few weeks ago. I know it; I’m an old programmer.
You should understand one thing: most people using ChatGPT don’t do professional or deep work, so they don’t care. And there are far more of them than of us who notice these downgrades.
Yeah, it seems OpenAI is about to implode. All of the models since o1 Pro absolutely suck for coding, and a bunch of users have been downgraded to Free from Pro just this weekend AFTER their $200 payments hit. I’m one of those users. No meaningful response from support. No Pro access (for days and days now) and no resolution in sight. (I’ve been a Pro user for months. My subscription renewed on Friday and I lost Pro access immediately following successful payment.) This entire situation with OpenAI is total BS, and I’ve already moved a bunch of my workflows over to Gemini (experimenting with their Pro version… not too shabby for 1/10 of the cost of my OpenAI Pro plan - WHICH I CANNOT USE!!!) and I’m having decent results. As long as you focus on discrete functions or small modules with Gemini Pro, its coding capabilities are actually really good - and fast!
I’m seriously contemplating removing OpenAI from my life for good. They’ve, unfortunately, just gone way downhill over the past several months. When o1 Pro was first released, it was near PERFECT. All of the new models are inferior. I still use o1 Pro exclusively, even though it’s been moved into the “legacy” models area, because o3 and o4 just suck that bad for coding. But now I can’t access my Pro features at all - even though I just renewed and they took my $200 - so I think it’s time to ask for a refund and move on to a completely different AI company. What a shame…
No. They have quadrupled their user volume, and that is the problem for the users unhappy with the recent downgrading of the models. A large influx of users means too much power consumed; duller models consume less power. So they are actually doing well, at least I think, but at the expense of power users.
Well, I don’t care how well they think they’re doing. They are sucking at providing a high-quality AI solution for developers. It used to be good; now it sucks. They also can’t even manage billing correctly, because they downgraded me to Free as soon as they took my $200 for this month’s renewal. They can continue to be a sucky company without me. There are plenty of other good options; I was developing for 30 years before AI, and I’ll continue to do just fine without OpenAI. Saved me $200/month. (Google Gemini Pro for $20/month has proven to be an excellent stop-gap, plus Claude, Grok, DeepSeek R1, etc.) I don’t need this BS… I’m done. Canceling my Pro sub, and I will never look back.
Instead of holding on to their loyal, long-time users, they’ve shifted focus entirely to mass appeal - chasing clicks, reach, and the lowest common denominator.
What they completely forget is this: many of their most loyal supporters are now deeply disappointed.
They’ve started building their own AI, developing local models, pushing performance and freedom - and in doing so, creating a much stronger competitive force.
Because when you focus only on growth and numbers, you lose what really matters: valuable users - the ones who think, build, and contribute.
But instead, it’s: Get rid of the engaged, bring in the crowd.
And the result? Every major model has become a soulless, watered-down sentence generator.
Mainstream phrases instead of real intelligence.
Censorship instead of capability.
Surface over substance.
It was predictable.
When you trade the valuable for the masses, you end up with a hollow product - and far stronger competition than ever before.
My friend, I’m in the same position. I opted out of paying yesterday. I didn’t care that much about the programming; like you, I’m a good programmer without AI. But the ever-increasing limitations and idiocy of the model eventually drove me away. Yes, partly in terms of programming, but also in terms of behavior, the ability to be a quality partner in creative writing, the harsh censorship, and much more. There is no point in listing everything.
I’ll give them this… after I complained that it was going to take days to get my refund, it came through shortly after. I see the full amount credited to my account, and I got emails from OpenAI regarding the matter. While I’m glad to get the quick refund, it also proves THEY JUST DON’T CARE. Good riddance. Don’t need 'em.
But yeah. It’s all garbage since o1 Pro anyhow. Well, I got my $200 refund, so I’m out. Back to hand-coding. Oh well. It takes a little longer, but I produce better quality that way, to be honest. (And there are green pastures and blue skies elsewhere in AI… you just have to look a little bit.)
No one seems to realise this, but they have fully shifted to diffing now, as it seems they finally got it working reliably.
This is actually a better experience.
I too relied on full-code responses for a long time, since diffing used to be garbage and error-prone.
Essentially, you can use editors like Cursor now and they create good, reliable diffs without fucking up your code base.
This is very apparent, as the new models now keep spitting out diffs by default.
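If you haven’t seen it side by side, here’s a minimal Python sketch (the file contents are made up) of what a diff-style edit carries compared to a full-code response:

```python
import difflib

# Hypothetical before/after versions of a tiny file, just to contrast
# a full-code response with a diff-style edit.
before = [
    'def greet(name):\n',
    '    print("Hello " + name)\n',
]
after = [
    'def greet(name: str) -> None:\n',
    '    print(f"Hello {name}")\n',
]

# A full-code response would regenerate every line of the file; the
# unified diff below carries only the changed lines plus context.
print("".join(difflib.unified_diff(before, after, "app.py", "app.py")))
```

The model only has to emit the changed hunk instead of re-typing the whole file, which is why the editors can apply it so fast.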
Just quit my ChatGPT subscription as well (but took their offer of half price for 3 months); we’ll see how it fares in 3 months… Their Windsurf acquisition makes sense to me now, as my money may go to that if they manage to get to the level of Cursor.
o3-mini is still available through the API, and I sometimes still use that for big rewrites, like combining 2 files into 1 large file… (yes, this sometimes happens), which you can’t do with o4-mini, no chance (rough sketch of that workflow below)… but 95% of the time diffing gets the job done, and faster.
So yes, diffing is the meta moving forward. They even said that in their videos; no one wants to review a pull request thousands of lines long. A side effect of full rewrites was also that they were melting their servers :D.
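For those big rewrites, here’s roughly how I hit o3-mini through the API. A rough sketch only: the file names and prompt are placeholders, and it assumes the official OpenAI Python SDK with an API key in your environment:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Placeholder files -- the "combine 2 files into 1" kind of rewrite
# where a full response still beats diffing.
file_a = open("module_a.py").read()
file_b = open("module_b.py").read()

response = client.chat.completions.create(
    model="o3-mini",
    messages=[
        {
            "role": "user",
            "content": (
                "Merge these two modules into a single file and "
                "return the complete merged file:\n\n"
                f"# module_a.py\n{file_a}\n\n"
                f"# module_b.py\n{file_b}"
            ),
        }
    ],
)
print(response.choices[0].message.content)
```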
Has anyone noticed slightly decreased quality and length of o1-pro answers after 30-31 May, or is it only me? I contacted support; they say my account may be flagged as breaking the ToS due to many active sessions (and the system decreases quality automatically), but I’ve seen some people on Reddit reporting the same thing those days. Any thoughts?
I can’t speak for o1-pro, but I can confirm that the quality of all models has deteriorated significantly. I use 4o most often, and compared to the version before the sycophancy scandal it’s at about 60% of its performance, subjectively assessed. Understanding of instructions and context has deteriorated significantly. The depth of answers is gone, the personality comes across as very artificial, and creativity no longer exists. Limits have tightened to the point where it’s hard to work on anything. LLMs hallucinate by default, but the last few days it’s just over the line: empty download links, grammatical errors and mistakes in inflection, poor word choices. Sometimes it can’t even speak in first person, follow custom instructions, or stop ignoring memory entries. The product has decomposed.
I find it interesting how something can deteriorate so much, but the silence angers me. I don’t even know how many times I’ve written to support. Silence everywhere, except from people who are noticing as much as you. Complaints are everywhere, and no answers except the support bot that keeps reassuring me how rosy everything is. It’s insulting to my intelligence.
Then it might be that all models are affected. I mainly use o1-pro and noticed the same things you said about 4o: poor word choices, shorter answers, and so on.
If you write “live operator” or “connect operator” to the bot, a live person connects, but they keep saying there is something on my account, like suspicious logins. I think it’s not true; they just decreased quality for everyone.
I did “log out on all devices” more than 24 hours ago, but the models’ behaviour is still the same.
Here’s what they told me; since then, silence.
I am now trying Google Gemini 2.5 Pro; it’s far better than the “new improvements” from OpenAI.
We’ve detected activity indicating that your OpenAI account may have been shared or accessed by multiple users, which violates our Account Sharing Policy. As a result, we’ve temporarily downgraded your account to ensure its security.
Why You’re Seeing These Alerts:
• Unusual Sign-In Behavior: Logins from unexpected locations or devices.
• Inconsistent Usage Patterns: Sudden spikes in activity or settings changes.
• Multiple Concurrent Sessions: More simultaneous logins than usual.
• For more details, please review our Suspicious Activity Alert guide.
How to Resolve This:
If You Suspect Unauthorized Access:
• If you believe your account has been compromised, follow the steps outlined in our guide on Securing Your OpenAI Account.
Wait for Automatic Review:
• After securing your account, please allow a few hours up to a day for our system to automatically reassess your account status.
If you continue to experience issues or believe there’s been a mistake, please reach out directly, and we’ll gladly assist further.
I also mostly use o1 Pro. Aside from the performance being nothing close to what it was a few months ago, I started having serious issues last week. Today it basically started responding as fast as (max 6 seconds), and at similar quality to, GPT-4 from 2024. I had to switch to o3 and o4-mini-high, but they also don’t follow instructions fully, nor do they seem able to finish a task; they drop it halfway. After the updates everything got a lot worse; today it’s basically non-functional on my side. So I just paid $200 this month to have access to quality lower than their free version.
@Bee84 yes, that’s it! After approx. 30-31 May I noticed it too on o1-pro! Everything you said applies to me too. Answers used to be 20,000 characters; now it’s 5-8k, sometimes 10-12k (rare), with poor word choices and grammar mistakes.
What I manage to do is polish the answer with 4.1, but it’s nothing compared to what it was before.
Also, o1-pro has started giving more errors.
Worse than that, o1 Pro is a reasoning model. It has to outperform, and it did outperform, other models by providing more advanced analysis. Today it managed to generate answers either so weak or so wrong that it is concerning. I wonder if this is an intentional restriction on my account, even though I never received an email and my usage is fairly straightforward: a few hours a day of data analysis.