OpenAI, What the Heck is Going On? – Stop Creating & Only Refine, I Beg You!

I use GPT every single day—from the moment I wake up until I go to bed. This is not just some fun tool for me. It’s how I work, build, create, and solve problems. And right now? Everything is falling apart.

Who is making these decisions? Because they are actively ruining the experience for people who rely on this. It feels like no one is actually testing before pushing updates, and we’re the ones suffering for it.


I Am Dead in My Tracks Right Now

I create bots and programs every single day. Some of the tools I’ve built are so intricate it’s mind-blowing, and the craziest part? I don’t even know how to code well.

Yet with GPT-4o-mini, not even a month ago, I was able to recreate AnythingLLM from scratch, integrating multiple scripts into a large system that worked flawlessly.

That was then—when we had GPT-4o-mini. Now? I wouldn’t even attempt to touch that code in the current state GPT is in. But I need it all the same.

Who is telling you to nerf GPT? Why are you deliberately making it worse?


First, Why Did You Remove GPT-4o-mini?

  • GPT-4o-mini was the best model you ever made. Hands down. It wasn’t just “good”—it was the only model that felt truly reliable for real coding work.
  • It could easily generate full scripts up to 3.5K lines with no issue.
  • It never hit a true cap. The only time I ever saw it struggle was one single day in a whole month when I pushed it to 3.2K lines and had to regenerate five times. That was only a month and a half ago.

Now? I can’t even get it to complete 520 lines without screwing up.

How do you go backward like this?


Then You Gave Us o3-mini… And It’s Getting Worse Every Day

You replaced GPT-4o-mini with o3-mini, and for about a week, it was okay. But then?

  • It started forgetting random things.
  • It can’t even update a simple 520-line script.
  • It spits out incorrect, nonsensical code.

And honestly? It feels exactly like DeepSeek. Did OpenAI just rip off DeepSeek’s model and slap it in here? Because that’s exactly what it acts like.

   (You see, as consumers we don’t care what you do internally at OpenAI. What matters is our happiness with the product, how it can help us, and what we can see and touch on the platform without needing to dive into code. I never started working with code until 2023; now it’s all I do. The Web Platform dragged me into the API, but you must assume most customers do not want to use the API. The API is a suite of tools, my guys! It is not your brand or your image, and your brand is what 99.9% of your consumers care about—and it is badly lacking!)

Since GPT-4.5 Dropped, Everything Has Been Awful

Ever since GPT-4.5 appeared on the API, everything else has completely gone to shit.

  • Why is the API causing weaknesses in the Web Version?
  • Why does the release of a new model make every other model worse?
  • Why do I feel like I’m being pushed toward the API instead of the product I already pay for?

I use the API all the time, but I still depend on the web version constantly because the API costs too much. Isn’t that the whole purpose of GPT Plus?

Can OpenAI just let users toggle on or off whether they want to be subject to forced updates? I want to use GPT-4o-mini again—on the Web Version, not just the API.

Because let’s be real—using GPT through the API costs me about $5 an hour.

  • That’s about $75 a day if I tried to replace my web usage with the API.
  • The only reason I can even afford the API at all is because GPT-4o-mini is $0.007 per 1k tokens.
  • If it were any higher, I would have no use for the API at all.
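For what it’s worth, here’s the back-of-the-envelope math behind those numbers. These are my own observed figures from above, not official OpenAI pricing, and the 15 hours/day is just what $75/day at $5/hour implies:

```python
# Sanity check of the cost figures quoted above (observed rates from this
# post, not official OpenAI pricing).
price_per_1k_tokens = 0.007   # claimed GPT-4o-mini rate, USD per 1k tokens
cost_per_hour = 5.00          # claimed API burn rate, USD per hour
hours_per_day = 15            # implied by $75/day at $5/hour

tokens_per_hour = cost_per_hour / price_per_1k_tokens * 1_000
cost_per_day = cost_per_hour * hours_per_day

print(f"~{tokens_per_hour:,.0f} tokens/hour -> ${cost_per_day:.2f}/day")
# prints: ~714,286 tokens/hour -> $75.00/day
```

So even at that cheap rate, a full workday through the API runs roughly 10 million tokens and $75—which is exactly why the web subscription matters.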

So what does OpenAI want?

  • Do you want users to completely switch to API-based desktop agents?
  • Or do you want to keep the Web Version strong?

Honestly, I don’t see why OpenAI can’t do both—and then some—with a budget close to $300 billion. But hey, I don’t know much. I’m just your average Jon Snow.


This Isn’t Just Inconvenient—It’s Actively Hurting Us

The changes OpenAI is making aren’t just annoying—they are wrecking productivity for real users.

I’ve been so frustrated that I’ve literally slept only three times in the last few days out of pure depression.

It’s that bad.


Here’s What You NEED to Fix:

1. Stop Removing Working Models

  • If a model works, LEAVE IT ALONE.
  • At the very least, give us a dropdown menu with all past versions and let us save our selection so we don’t have to pick it every time.
  • This isn’t just about user preference—it’s about long-term analytics.
    • If you actually let users select and stick to their preferred models, you’ll get real data on what people actually want over months and years—not just 3 weeks.

2. Fix the Model Degradation

  • o3-mini started off fine and then fell apart.
  • Why does every model degrade over time?
  • If this is intentional, it needs to stop. If it’s unintentional, it means someone in your dev team is breaking things. Either way, fix it.

3. The API and Web Platform Should NOT Interfere With Each Other

  • Why does releasing GPT-4.5 for API make the web platform worse?
  • If the API and the web service aren’t connected, why does every new API model degrade web chat performance?
  • Separate the teams. Your internal structure is clearly mixing things up and breaking the core chat experience.

4. Be Transparent About Updates

  • Stop stealth-updating models and acting like we won’t notice. We always notice.
  • Let users opt out of experimental changes and stay on a stable build.

5. Your Pricing is Ridiculous

  • You expect us to pay $75 per million tokens for the API on top of our regular subscription?
  • We already pay for GPT Plus—why are you locking the best models behind an extra paywall?
  • If the API is the only way to access better AI, what’s the point of paying for the chat interface at all?

Final Word

I wouldn’t even be writing this if I didn’t care. I’ve spent hundreds of dollars on GPT, and I want it to succeed. But right now, it feels like whoever is running OpenAI doesn’t have a clue what they’re doing.

You’re on the verge of losing your most dedicated users. The only reason we’re still here is because there isn’t a better option—yet.

So fix this.

  • Roll back to a stable point.
  • Bring back GPT-4o-mini.
  • Stop making everything worse.

If you don’t? People will move on the second something better comes along.

Sincerely,
A Frustrated (and Tired) Paying Customer

[generated with a slightly more stable version of gpt] 🙂

I see some bold statements there 😝


They are separate products. They are not bundled. So …?

Also, on the price, it appears to reflect how big the model is and how much energy it takes to run. These rates are almost certainly already subsidised so think yourself lucky!

If that price is too high for you simply don’t use that model. OpenAI does not owe you anything. They offer a product at a price and you can choose to use it (or not!)

Who does? You do. Don’t assume others use both products. Many do, but many don’t.

It’s not an extra paywall. If you spent more time watching the presentation and not refining your rant you would learn that it is likely Plus & Team users may get access to 4.5 possibly as early as next week.

But it is also completely reasonable for OpenAI to release 4.5 on ChatGPT only to Pro subscribers, who are probably the only group paying close to cost, whilst free and possibly even Plus users are being subsidised at present.

So again, if you are in either of those groups, think yourself lucky and thank the OpenAI investors for subsidising your account 😉


ChatGPT is no longer worth paying for. I test all the AIs on a daily basis, and in principle GPT maintains a good position as a voice assistant for everyday use. It is not suitable for serious tasks. You just have to accept it and move on to other alternatives.


those are still problems bub

you literally answered me with more issues

   (Billions of dollars.) If that’s all we’ve got to defend the fact that OpenAI is using us as test dummies, then I guess I touched on some truth. But sure, let’s defend them!

Yeah, someone’s gotta bitch.

This is true. Do you suggest a better option? I have looked around; lately I tried Grok. But I need a platform that lets me work all day and send hundreds of messages for nearly the same price I’d spend on GPT. That’s one of the only reasons I stay with GPT.

No, there is nothing suitable for work at the moment, short of running local ~600B models. Claude can give quite long answers and write quite long scripts, but again it has a high percentage of errors. It seems to me these technologies just aren’t ready yet.
