Did OpenAI secretly downgrade our models while everyone was leaving? 🚩

Hey Plus users, something feels really off with GPT lately…

Been a loyal Plus user for months, but with all the recent OpenAI drama (mass exodus of top researchers, Altman chaos, etc.), I’m noticing some concerning changes:

  • GPT-4o acting like a mini version suddenly
  • Code Interpreter “not available” (when did this happen? Apparently you can “unlock” Code Interpreter now by sending a random .jpg to GPT-4o? Feels like they forced multimodal on us and quietly swapped the base model for GPT-4o-mini… does OpenAI really think we wouldn’t notice the bait-and-switch? :clown_face:)
  • o1 completely lost its reasoning abilities
  • Still charging full $20/month :upside_down_face:

The timing is… interesting. Right after losing so many key AI researchers to competitors (Anthropic, Google, etc.), our models suddenly feel dumbed down?

Quick test: Try any complex problem. Notice how it:

  1. Doesn’t show reasoning steps anymore
  2. Gives surface-level responses
  3. Can’t handle advanced tasks like before

No announcements. No emails. Just silent changes.

Between:

  • Top talent leaving
  • Competitors advancing rapidly
  • These mysterious model downgrades

…is OpenAI still the leading AI company they claim to be?

Anyone else feeling like we’re paying premium prices for increasingly inferior service? Or am I just paranoid?


I have noticed the same thing. It is useless to me for coding now; it used to be really good. I noticed the change three weeks to a month ago. It just gets stuck on everything. Of the last 20 hours I spent on ChatGPT 4o, I estimate I could have done the work myself in 5 and been a lot less frustrated. I wrote this prompt to try to deal with it, but the model doesn’t remember it unless I add it to every message:

When modifying or adding to this code, do not remove or alter any existing functionality, even if it seems redundant or unnecessary. Your changes should build on top of the current code without breaking or omitting any part of it. Clearly explain any modifications or additions you make, and ensure the original functionality is fully preserved.

Yo,

You know - that’s the open secret. It’s not widely known and it’s treated almost like a conspiracy theory. I thought it was nonsense too at the beginning of my LLM journey, but once I learned more about GPT and saw how things play out, it started to make sense.

Here’s the thing - there are many factors at play (updates, bugs, new features), but OpenAI seems to systematically dial down model quality when demand is high - or even when it isn’t - tuning the model to… I dunno, save compute? Makes sense if they burn money on every 20-bucks plan.

The model’s performance, message caps, per-message token limits, the number of DALL-E images you can generate at once, silent switches to GPT-4o-mini, and so on - this is business as usual for OpenAI. And as for transparency - they’re about as transparent as I am, sadly.

In the end, you have no choice: deal with it or switch to another foundation-model provider (though when it comes to foundation models, it’s hard to argue that GiPiTi isn’t superior).

About the “competitors advancing rapidly” part - yeah, sure, they are. But so is OpenAI. Don’t you see the progress, the new models, and the features they’re rolling out? If you’re not noticing the changes, maybe it’s time to check the competition and compare.

That said, I’m the last person here to defend OpenAI - especially given their “transparency” and the history behind it. But hey, at this point, that’s more of a myth, right? :wink:


Hello,
I also saw a huge degradation in the past weeks. It’s now very hard to get any real “thinking” in the answers; you just get the same one over and over. I’ve tried many ways to get the old behaviour back, for example with “you already answered this. Please search on internet different answers in 50 sites or blogs”.
Very frustrating (:

I’m using ChatGPT Plus. Again in the last few days, with 4o selected there’s no Code Interpreter, no web browsing, etc. Instead the model responds fast and feels like GPT-3.5.

I pay for a service, and yet they swap and change the model however they want, without notice… it’s just unacceptable, but clearly our only choice is to endure it or go somewhere else. Developers and users have memory too.
