GPT-4 is awful as of 2024: significant degradation in coding assistance

I’ve been a plus subscriber ever since the beta preview rollout, and I’ve gotten a lot of value out of my subscription over that time.

Recently, as in the last few weeks to months, while using it in exactly the same way I always have (I'm a web developer and typically pass code back and forth with the GPT-4 model), I have noticed an incredibly frustrating number of inconsistencies, and even what seems to be a complete shift in how it responds to my requests.

When I send even one line of code and ask it to correct it, I now get six or seven full, in-depth paragraphs of text in response, without a single line of code included. When I ask it to stop typing so much and just give me the corrected code, it still talks its head off and doesn't give me the adjusted code.

These recent changes have rendered the GPT-4 subscription completely useless and valueless for me.

It’s also noticeably faster and generally lower quality, feeling like I’m using the GPT-3.5 model 100% of the time.

Lastly, roughly 20% of new conversations end up with a Spanish summary/title. Oftentimes these Spanish titles are in all caps and don’t seem to relate in any way to the content of the conversation.

I’m new here in the forums and admittedly haven’t done much due diligence. I just need some context: have there been significant changes lately? And why? Why is this happening, and is it going to be fixed anytime soon? Like I said, this has rendered GPT-4 almost completely useless for me and my use cases. I think tomorrow I will be cancelling my subscription and looking into the new Google Gemini Advanced subscription instead.


I noticed the same thing, and I think I am also going to cancel my subscription and start paying for Liner AI. It has GPT-4 built in with some extras, like a browser extension for use on websites.

I have had similar issues lately… I am a Teams user. I think it was getting better and better until a few days ago; now it is terrible. I have never liked Gemini, but it too has given horrible answers for the past few days…

Asking about a React code bug and getting this as an answer, multiple times: “Elections are a complex topic with fast-changing information. To make sure you have the latest and most accurate information, try Google Search.” :sweat_smile:

Reverting to ChatGPT 3.5 until issues have been fixed.

Over the past year I’ve noticed it slowly get worse and worse (I use it as a starting point for essay writing, editing, and research). At first it felt like it was getting worse because of new computational complications associated with maintaining ethical AI.

I noticed another backwards leap in its abilities around September/October, when its ability to do online research started to go downhill.

Now in 2024, I’ve noticed that it’s really bad at doing Google searches, and it will no longer do exactly what I ask (often omitting many points).

I think these more recent problems stem from advancements in the AI’s ability to summarize as well. I think the large amount of ChatGPT content on the web has started to influence what the model ‘thinks’ its output should look like, so you end up with fluffy language as opposed to real content.

If the model doesn’t improve by June/July, I will be cancelling my subscription, as it is no longer valuable. I’ve started having to revert to googling questions and fully constructing my essays by hand, since it omits MANY points when I try to get it to merge sections for me. In the past I could tell it to be very detailed and exhaustive, but those words mean nothing to the model anymore.
