GPT-4 is getting worse and worse with every single update

So I've been using the web GPT-4 model fairly consistently since its release, and I have to say it's gotten so bad I no longer want to use it. I'm back to Google.

Every release has made it dumber, but this last one has made it pointless for me. The answers aren't as intelligent, the code has more problems, and its ability to remember the conversation is gone. I have to keep repeating facts from literally one or two short messages back.

It's such a shame to see something so useful turned into such mediocre garbage.

But they have to cut the price as they're burning through cash, and they have to censor the product as they're a big corporation now pretending to be a startup.

Anyway, it was a fun 6 months whilst it lasted; hopefully some open-source models will catch up in quality soon enough.


I have noticed the same thing. It even misspells my words when replying to me. It has worse retention than 3.5 had before 4 came out. It gives wrong answers all the time, and if I say something is wrong, it just goes along with what I say, even if what I'm saying is made up.

It’s the worst it’s ever been.


I've noticed the same thing; the response quality has rapidly gone down. When I ask it technical questions, it now likes to give a numbered list of generalized points instead of understanding my intent and responding accordingly, as a person would. I have to waste extra prompts to make it understand that I want a more direct and engaging response.


same here. GPT-4 is good, but not performant…

First of all, what annoys me most is that when I write to it in the language of my country (Romanian), it answers me in English, as if it doesn't recognize the language I'm speaking.

On the other hand, I give it the necessary instructions and teach it everything it needs, but it completely forgets things within the same chat. It often doesn't understand simple things, and I have to repeat them again and again. Every time, I'm starting over. I explained very simply and very clearly how I want it to write a text, according to my way of expression. I gave it examples and offered it a wide, very complex set of instructions, but ChatGPT still expresses itself only the way it wants; it keeps my way of expression maybe 45% of the time. I struggled for a week to keep explaining, step by step, but in vain…

ChatGPT should be able to remember not only the issues discussed in a single chat, but also those from the user's other chats, since all the chats belong to the same user.

Then, I have about 60 conversations, each quite substantial, and it's a bit difficult to load them. If I open a new chat and tell it to remember what we discussed a few days ago, it simply doesn't know. And again I have to give it prompts, start over, and explain everything to it.

ChatGPT should remember all of each individual user's conversations, so that the user doesn't have to repeat the instructions in every new chat.

And, of course, I gave it instructions to google some simple things it doesn't know, but it won't google them. For example, there are manuals for household appliances; I gave it a link to the manual, in both PDF and TXT format, but it can't read it.

For weak answers, I showed it how to search on Google in order to give clearly better answers, but I showed it in vain. It only does as it pleases…


Yes, I have noticed that too, and for coding tasks it's getting much worse.

I have noticed that for code generation ChatGPT now never gives full source code; it often leaves placeholders saying "fill in your own code here", and it won't convert or translate projects at all.

For any project translation, it says that, due to the current limitations of the system and the complexity, it cannot convert the project and can only guide me through converting it.

Which is totally unhelpful, since I don't need guidelines or help; I need my project migrated to the other language.


It just ignores your instructions like a boss lol :joy:
Yeah, I had noticed that too. I had given it instructions to never give me incomplete code and to always give me the full source code, but it still gives incomplete code with placeholders :unamused:


I've recently been relying on GPT-3.5 instead of GPT-4. GPT-4's answers are now ridiculously long and sometimes just wrong, or nonsense. It now loses track of what was discussed previously. Worse, GPT-4 is a model that changes over time, which is terrible for GPT-based apps. I am starting to think about cancelling my subscription.


Thank you, I finally feel validated. I had a hard time finding anyone else experiencing this. It was so good, and now I feel like I am arguing with a stubborn toddler. It costs so much more time to get anything useful out of it. Such a bummer; it had a great run, but it's been trending down for too long now for me to be hopeful it will reverse anytime soon.
Edit: I use it to help with Python and C/C++.


I notice this too, but how can this be when they said GPT-4 Turbo was supposed to be an improvement in quality, not a reduction? What was the point?


I was searching the internet to see if more and more people are experiencing problems with ChatGPT 4.0. Reading this confirms exactly what is happening.

Coding in ChatGPT 4.0 gets harder and harder each week. Here’s what I notice:

  1. The model seems to no longer remember the history of the conversation.

  2. The history isn’t even fully stored anymore; entire parts are missing.

  3. Code that was created, containing errors, is suddenly truncated or completely altered.

  4. I even have ChatGPT generate the prompts itself to avoid these errors, but this doesn't help at all. It goes wrong every time.

  5. The intelligence of the coding is getting worse and worse. Even small mistakes keep recurring, and the model can no longer resolve them.

  6. The model always wants to divide code into many pieces, so you yourself are left spending a huge amount of time cutting and pasting, and new errors arise.

  7. Even stating in every task that you want to see the complete script only works if you repeat this several times.

  8. It seems like the model wants to save CPU power by cutting back on lines of code, but this is completely counterproductive.

  9. Today, I spent hours on a simple script that ChatGPT couldn’t even solve.

  10. The same mistakes keep happening over and over again, and it’s like mopping the floor with the tap running while trying to code this way.

  11. I hear nothing from the company about these problems getting worse and worse. It seems no one cares, but the people who are coding are going crazy.

I hope this can be resolved quickly, but since it has been this way for so long and is only getting worse, I don't believe it will be. It's a pity we can't report this directly and be heard.

What solutions are there? Using the API, but that's not very user-friendly.


You took the words right out of my mouth! This is exactly how it is.
Coding used to be fun and inspiring. Now, it’s a major frustration. It seems like they’re saving CPU power by providing short answers, no longer remembering history, etc.

Then there's Bard. It's completely unusable: it can't code, mixes things up, and just spouts complete nonsense! It's going to take years before anything useful comes out of that. Meanwhile, ChatGPT continues to slide further and further downhill, so people will drop off. Maybe that's the intention too: a subscription for people who use it sparingly and have simple tasks.

I am deeply disappointed in ChatGPT 4.0, and the worst part is that it feels like we're talking into the void. No one is interested in your problems with the model. Not even the model itself.

It was fun for six months, and now it's becoming a very frustrating experience.
I can't help but think that they're massively saving on CPU power.
More and more people want to use it, and expansion is costly.
Then there are the internal problems at the company. I'm curious to see how this will end, but unfortunately there is no alternative, because everything is based on ChatGPT. Whatever is based on Google's model, you can forget altogether.
Strength to everyone. We thought we had something beautiful, but it was nothing more than a puff of wind.


I have to say it's absolutely terrible now. I used GPT-3.5 to generate some SQL queries, and it was very impressive, so I decided to buy GPT-4, assuming it would be even better.
Now I just want it to generate a simple SQL query (compare the output of one query to a list of values to see which items are missing), and it was not able to write a working query in 10 (!!!) attempts. This is just a joke: it keeps generating wrong answers again and again. This is not a free feature; you actually have to pay for it, and the quality is a joke now.
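For reference, the task described here is a standard anti-join. A minimal sketch in Python with SQLite (the table, column, and values below are made up for illustration, not from the poster's actual schema):

```python
import sqlite3

# Hypothetical table holding the items that exist.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (sku TEXT)")
conn.executemany("INSERT INTO items (sku) VALUES (?)", [("A",), ("B",), ("D",)])

# The list of values we want to check against the table.
wanted = ["A", "B", "C", "D", "E"]

# Build a CTE from the list, then LEFT JOIN and keep rows with no match:
# anything in `wanted` that is absent from `items` survives the WHERE.
query = f"""
    WITH wanted(sku) AS (VALUES {",".join("(?)" for _ in wanted)})
    SELECT w.sku
    FROM wanted AS w
    LEFT JOIN items AS i ON i.sku = w.sku
    WHERE i.sku IS NULL
"""
missing = sorted(row[0] for row in conn.execute(query, wanted))
print(missing)  # → ['C', 'E']
```

The same anti-join shape (`LEFT JOIN … WHERE right.key IS NULL`, or `NOT EXISTS`) works in most SQL dialects.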


I would very much like to hear from the developers of ChatGPT about what exactly is going on. The quality has been declining steadily from what it was. Just a few months ago it was almost a dream to work with, but now I'm experiencing immense frustration. If you have to repeat the same command 4 times before it is executed, there is definitely something wrong.

I do understand a few things. It's getting busier at OpenAI. More and more special versions of ChatGPT are being released, and you can also create your own customized ChatGPT. I've done that, by the way, but it makes absolutely no difference compared to the standard ChatGPT 4.0.
The number of users can only increase, and all of this has to be handled by the same infrastructure. So: less computing power for everyone, and shorter history to free up space for even more users.
This sounds logical, and it wouldn't surprise me if this is the case, but to be clear, it's just a hypothesis.

Users with simple queries may not notice this problem, but users with more complex tasks will notice it all the more.

Once again, I hope that something is done about it, and if not, that we at least know what we’re dealing with. It’s a tremendous shame to see such a good product decline like this. Many users will leave without saying anything, which is why feedback such as this should really be listened to. Just my idea, of course!


They essentially have, from what I've heard, a middleware model (perhaps 3.5) that serves as an entry point: it receives the prompt, simplifies it, and then sends it on to GPT-4.
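To be clear, that setup is a rumor, not anything confirmed, but the two-stage pipeline being described would look roughly like this sketch (all function names are made up stand-ins):

```python
def simplify_prompt(prompt: str) -> str:
    """Stand-in for a cheap 'middleware' model that condenses the user's prompt.

    A real system would call a small model here; as a placeholder we just
    strip blank lines and collapse whitespace, which already loses formatting.
    """
    lines = [line.strip() for line in prompt.splitlines() if line.strip()]
    return " ".join(lines)


def answer(prompt: str) -> str:
    """Stand-in for the large model that only ever sees the simplified prompt."""
    return f"[gpt-4 answer to: {prompt}]"


def pipeline(user_prompt: str) -> str:
    # Middleware first, large model second: any detail dropped in the first
    # stage never reaches the second, which would explain ignored instructions.
    return answer(simplify_prompt(user_prompt))


print(pipeline("hello\n\n  world "))  # → [gpt-4 answer to: hello world]
```

If anything like this exists, detail loss would happen before GPT-4 ever sees the prompt, which matches the "ignores half my instructions" symptom, but again, this is speculation.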

However, I agree that it is gradually waning. Particularly since the release of the GPTs, both 4 and 3.5 have become incredibly stupid. Prior instructions and context are completely disregarded. It cannot respond to a basic email, not even when given explicit instructions on what to say. And asking it to generate code is basically useless; you won't get anything usable.


It's becoming incredibly frustrating to use for coding (Python, Go). It went from a good, really helpful experience to me pulling my hair out. It completely ignores half the instructions now, can't remember anything, and in general has gotten buggier and buggier with each update.

Is there a way to choose which 4.x version you want to use? The first 4.x one I used (January?) was so much better than 3.5, but now I often have to switch to 3.5 to get a half-decent answer. Emphasis on half.

Edit: the Experimental menu shows an option to generate a customized GPT model. Let's see if I can get something that's usable again.


Came here because I feel the same.

Today I was thinking of breaking up. :frowning:


Sadly, I am noticing this too. It looks like the giant number of users is causing them to downgrade the model itself. It's very disheartening for someone like me who has been using the premium service for more than 5 months. Now even the custom GPTs are reducing our uploaded knowledge. DALL-E was nerfed too… I think I will cancel my premium if they don't fix this quickly.


I have been very frustrated working with GPT-4 today. I wasted over 80 messages with GPT-4 replying with nearly identical non-working code, despite my very detailed instructions about what it needed to fix. I've never seen it perform this poorly since its release.


GPT-4 is consistently giving identical answers back for the code I provide it, even though I'm explicitly telling it what is wrong. There is something seriously wrong with the service at the moment. This could be related to the new models or updated training data; I'm not sure, since the whole platform is painfully opaque. I'm only complaining now because it seems fundamentally incapable of handling the kind of work I've been using it for all along. At this point, it's faster to just write the code myself. I was not expecting it to get worse over time.

GPT3.5 is performing better than GPT4 at coding tasks right now. Something must be wrong.

A good workaround for now is to use the pinned older models (gpt-4-0314 and gpt-4-0613) via the Playground/API. Please fix this, OpenAI!
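For anyone trying that route: pinning a snapshot just means passing the dated model name instead of the bare alias in the chat-completions request. A minimal stdlib-only sketch of what such a request looks like (the API key and prompt are placeholders, and whether a given snapshot is still available depends on your account and on OpenAI's deprecation schedule):

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"


def build_request(prompt: str, api_key: str,
                  model: str = "gpt-4-0314") -> urllib.request.Request:
    """Build a chat-completions request pinned to a dated GPT-4 snapshot.

    The dated name (e.g. "gpt-4-0314") pins the snapshot; the bare "gpt-4"
    alias floats to whatever the current default model is.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )


# Actually sending it is just urllib.request.urlopen(req); omitted here
# because it needs a real key. "sk-..." is a placeholder.
req = build_request("Write a SQL anti-join example.", api_key="sk-...")
```

The official `openai` Python package does the same thing with less ceremony; the point is only that the snapshot is chosen by the `model` field.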


Code quality has decreased substantially, to the point that I am now questioning whether this tool is worth using at all.
I have wasted more time telling GPT-4 what to do, then telling it to fix a number of issues in the generated code (plus wrong assumptions, etc.), and then also telling it to stop writing comments like "// here goes the code that you need to write …" lols. I mean, I am asking it to write the code, not a dumb stub for me. Yeah, it somehow gets the idea of what I need, but it now feels like it gives you half an answer.
It also now tends to forget the context after 3 or more prompts, rendering it quite useless at times, or at least more difficult to work with than it used to be.