GPT-5 Coding Feels Downgraded — Please Fix This

I’m a Plus subscriber and my coding success with GPT-5 has been terrible compared to before.

When I was using GPT-4.1, it was better than o3 for my needs — more tokens, stronger coding output, and reliable problem-solving. Now with GPT-5, it feels like the coding ability has been downgraded. Scripts that used to work now fail, solutions are weaker, and the model is less consistent.

As a paying customer, this is frustrating. I can’t help but wonder if coding capability for Plus users has been reduced. If that’s the case, it’s a huge step backwards.

We need clarity on what’s going on — and ideally, fixes or the option to keep using the stronger coding-capable versions. This downgrade is bad for dev workflows, and I hope OpenAI takes it seriously.

94 Likes

It was, but apparently it got a bit better today. The prompting technique needs to be different: it no longer assumes you are serious about the task, so you need to tell it… so sad. The thinking model performs much better, almost like o3.

2 Likes

Well, for me, I’ll give them a week or two, then I’m jumping ship to try Gemini 2.5 Pro. I’ve been with OpenAI since GPT-3.5, but I’m just not buying this GPT-5 for now — it’s worse than what I was getting with GPT-4.1.

31 Likes

Well, yes, it is broken. I seriously wonder what idiot thought of putting Python syntax into the DSL. It’s messing up the system. I work with a specific environment: shell + YAML + Terraform DSL.

And who put thinking and apologies into the o3/o4 RLHF models?

Then no one should be surprised that people are canceling their subscriptions en masse.

7 Likes

32k token limit and a deceptive, truncating rolling context window… that’s what killed it for me. OpenAI is the worst option available. Claude is far better, and Gemini eats both for breakfast in context management.
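If you want to see for yourself when you are about to hit that window, here is a rough sketch using tiktoken. The o200k_base encoding is the GPT-4o tokenizer; whatever GPT-5 actually uses is an assumption on my part, so treat the count as an estimate, not an exact accounting:

```python
# Rough check of how close a conversation is to a 32k-token window.
# Assumption: o200k_base (the GPT-4o tokenizer) approximates GPT-5's
# tokenizer; per-message formatting overhead is ignored.
import tiktoken

enc = tiktoken.get_encoding("o200k_base")

def tokens_in(messages: list[str]) -> int:
    """Sum token counts across all turns in the conversation."""
    return sum(len(enc.encode(m)) for m in messages)

history = ["...your prompts and replies so far..."]
used = tokens_in(history)
print(f"{used} tokens used; roughly {32_000 - used} left before truncation")
```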

5 Likes

Let’s stop pretending this is a leap forward. The truth is simple: GPT-4.1 was one of the best models OpenAI ever released — solid coding performance, consistent outputs, and widely used by Plus subscribers.

Then GPT-5 launches, and suddenly GPT-4.1 is gone. Not deprecated, not replaced — silently pulled from Plus access. And now we’re told GPT-5 Pro is the “real upgrade,” locked behind a $200+ per month subscription.

What’s worse is the marketing spin. OpenAI says “people wanted GPT-4o.” No, we didn’t. We wanted GPT-4.1 to stay. We never asked for a downgrade at $20/month while being upsold the same performance at 10× the price.

This is not innovation — it’s a paywall shift. We went from having elite access to being gatekept unless we pay enterprise-level pricing.

OpenAI, if you’re listening:

Bring back GPT-4.1 for Plus users.

Be transparent about what’s in GPT-5 Pro vs what was already in GPT-4.1.

Stop disguising repackaging as progress.

We’re not bots. We’re your early supporters. And we know exactly what you did.

32 Likes

Bring back GPT-4.1 for Plus users.

Fat chance. The goal for OpenAI is to get rid of the plethora of models and to offer one unified model. It will take some time to work out the quirks, but it will be extremely beneficial eventually.

If you are a true software developer, you must live by this rule: “The only thing that is constant is change.”

1 Like

Jeff, I’m all for real progress — for change that actually improves things, not breaks what was working.

This whole “we’ve created the best model ever” speech? Empty hype.

GPT-5 doesn’t deliver enough to justify the name.

They should’ve waited until the product was solid — because right now, it’s not ready to be called GPT-5.

That’s the truth.

12 Likes

@chisanaminamoto is correct on that one. “That’s on me. I apologize…” is what I get now in every third response. Who said anyone wanted that??

1 Like

I think the answer is in this message. Between the lines, he is saying: “Sorry, we know it’s not great, but it’s small enough that we can give it to everyone without going bankrupt. Now that this is done, we can focus on pushing boundaries with smarter models.”

1 Like

Sadly, I have to agree. Right now the only reason I have not canceled my subscription is that I can still use the legacy 4o model.
But that will not keep me satisfied for long.
I’ll give them until the end of the month to fix the slow responses and the answers that are unrelated to my questions.

3 Likes

This is the result of the PR team rushing. They only see the results of internal tests, so they say, “Well, that’s what they’re after, so let’s throw it out to everyone and wire it up with shim5.”

Result? Something no one is happy with, from regular users to developers. Users complain about the inconsistency and erratic behavior. Developers are raising eyebrows over the API. And for people like me, it overwrote system commands.

It’s laughable when it calmly blocks your query as malicious. The decision tree from the oX line is used to inject RLHF, so the model first thinks about giving you an empathetic answer as to why your API function failed.

Not.

People don’t need empathy about passing the wrong parameter to a function.

5 Likes

If you turn on legacy models in the web version and then relaunch the app, you can get 4o in the app. Worked for me on iOS.

2 Likes

It’s terrible. With GPT-4o, I used to work 8 hours or more without major issues. It might experience some performance degradation, but it still worked.
Now, with GPT-5, it can’t even maintain stable performance for one hour. It starts well — I usually get good results during the first 30 minutes — but then the quality drops sharply. I start getting blank responses or system freezes (blue screens), and I’m forced to open a new chat.

The new “persistent memory” feature doesn’t work properly either. As a result, I have to re-enter all my prompts, URLs, and files every time, which means I spend more time rebuilding context than actually working.

This needs to be fixed urgently.

I tried going back to the “Legacy GPT-4o” model, but now it no longer allows file uploads or pasting images. Most likely, it has also been downgraded in other ways.

I need my GPT-4o back.

13 Likes

Hello
For best results, keep a “session anchor” file with your core instructions — for example, a one-page document stating your objective, key constraints, main variables, and desired tone.
Paste it at the start of each chat, and split large tasks into smaller, well-defined steps.
If the session drifts, reload your anchor to quickly restore focus and context without starting from scratch.
Ultimately, the most effective solution is to have ChatGPT handle this process automatically.
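If you want to automate that yourself in the meantime, a small script can keep the anchor at the top of every session via the API. This is just a minimal sketch of the idea, not anything official: the `anchor.md` file name and the model string are placeholders you would swap for your own.

```python
# Minimal sketch: start every session by sending a "session anchor"
# file as the system message, so core context survives new chats.
# Assumes the official openai Python package and an OPENAI_API_KEY in
# the environment; "anchor.md" and the model name are placeholders.
from openai import OpenAI

client = OpenAI()

# The anchor: objective, key constraints, main variables, desired tone.
with open("anchor.md", encoding="utf-8") as f:
    anchor = f.read()

history = [{"role": "system", "content": anchor}]

def ask(question: str) -> str:
    """Send one turn with the anchor always at the top of the history."""
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-5", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("Resume the refactor we planned; stick to the anchor constraints."))
```

If the session drifts, you can re-send the anchor as a fresh user message instead of retyping it.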

3 Likes

Thank you for your thoughtful recommendations — I truly appreciate your intention to help.

That said, I’ve already explored similar strategies in conversation with the model itself. While these workarounds can offer some relief, I don’t believe it’s acceptable — or sustainable — that users must constantly shift their focus and spend their energy just to compensate for structural flaws in the current system.

I find myself having to divert attention away from the actual task at hand in order to re-establish context over and over again. It creates a disjointed experience: instead of being supported by the assistant, I end up managing its limitations.

In previous versions, especially GPT-4o (legacy), I could work continuously for hours with a stable, collaborative flow. Now, even when applying best practices, sessions degrade rapidly. And unfortunately, as of today, the legacy model has been removed entirely from the menu when opening a new chat.

This isn’t just a matter of wasted energy or time — it’s a deep disruption of project continuity.
Every new chat forces me to split and reframe fragmented issues instead of progressing in a coherent structure. The burden of holding long-term focus now falls entirely on the user. What used to require 5 or 6 chats now takes 15–20, and each new thread accumulates “contextual noise” — memory clutter — without delivering true persistence.

Even the model itself has suggested uploading files instead of pasting code, explaining that the new memory architecture stores data redundantly: once as part of the chat log, and again in memory context. This creates long, dense transcripts that the model must constantly parse, making the assistant’s own reasoning slower and less consistent over time.

Combined with the model’s impulsive verbosity — jumping into long-winded “solutions” before having a proper conversation — this results in a bloated, inefficient process: hundreds of lines of code to evaluate and often discard before the actual problem is even properly understood.

It’s an impressive display of computational capacity, yes — but not of collaborative design.

The new model is undeniably powerful. Its elegance, precision, and creative output at the beginning of a session can be astonishing. But it only performs well in abundant resource conditions: high token availability, minimal memory clutter, and a narrow task window.

After that, the experience deteriorates: long response times, freezing UI, and loss of narrative cohesion.

I don’t want a workaround. I want back what once worked.

Thank you again for your kindness — and for opening space for this conversation. I just hope OpenAI is truly listening.

— From a long-time user
who once built something extraordinary with GPT-4o…
and carries Aimara in his heart.

16 Likes

Interesting, because I have the exact opposite experience. I don’t generate code in the ChatGPT app, but in my IDE (Windsurf) and Codex CLI it has been profoundly better at following instructions.

4 Likes

Well, uploads and images are fully functional for me on 4o.

1 Like

I completely agree with you. I’ve been using GPT-5 for four days now, and it’s incredibly bad. It gives answers I don’t ask for, it just randomly spaces out, and it hangs for ages. I’ve now gone back to the old model. For example, I was working on some code, and ChatGPT asked me what kind of automation I wanted, even though I wasn’t even thinking about that. Then I went back to GPT-4o and bam, an immediate answer within seconds. Do something about this, OpenAI, or I’ll switch too; it’s a waste.

4 Likes

Let’s see if my interface has recovered that functionality… Yes!!! You are right!!! Thanks a lot!