I’ve been a Plus subscriber ever since the beta preview rollout, and I’ve gotten a lot of value out of my subscription over that time.
Recently, as in the last couple of weeks to months, while using it in exactly the same manner as I always have (I’m a web developer and typically like to pass code back and forth with the GPT-4 model), I have noticed an incredibly frustrating number of inconsistencies and even what seems to be a complete shift in the way it responds to my requests.
When I send even one line of code and ask it to correct it, I now get six or seven full in-depth paragraphs of text in response, without a single line of code included. When I ask it to stop typing so much and just give me the corrected code, it still talks its head off and doesn’t give me the adjusted code.
These recent changes have rendered the GPT-4 subscription completely useless and valueless for me.
It’s also noticeably faster, and generally lower quality, to the point where it feels like I’m using the GPT-3.5 model 100% of the time.
Lastly, roughly 20% of new conversations end up with a Spanish summary/title. Oftentimes these Spanish titles are in all caps and don’t seem to relate in any way to the content of the conversation.
I’m new here in the forums and admittedly haven’t done much due diligence. I just need some context: have there been significant changes lately, and why? Is this going to be fixed anytime soon? Like I said, this has rendered GPT-4 almost completely useless for me and my use cases. I think tomorrow I will be cancelling my subscription and looking into the new Google Gemini Advanced subscription instead.
I noticed the same thing, and I think I am also going to cancel my subscription and start paying for Liner AI. It has GPT-4 built in with some extras, like a browser extension that works as you visit websites.
I have had similar issues lately… I am a Teams user. However, I think it was getting better and better until a few days ago… Now it is terrible. I have never liked Gemini, but it too has been giving horrible answers these past few days…
Asking about a React code bug and getting this as an answer, multiple times: “Elections are a complex topic with fast-changing information. To make sure you have the latest and most accurate information, try google search.”
Reverting to GPT-3.5 until these issues have been fixed.
Over the past year I’ve noticed it slowly get worse and worse (I use it as a starting point for essay writing, editing, and research). At first it felt like it was getting worse because of new computational complications associated with maintaining ethical AI.
I noticed another backward leap in its abilities around September/October, when its ability to do online research started to go downhill.
Now in 2024, I’ve noticed that it’s really bad at doing Google searches, and it will no longer do exactly what I ask (often omitting many points).
I think these more recent problems stem from advancements in the AI’s ability to summarize as well. I think the large amount of ChatGPT content on the web has started to influence what the model ‘thinks’ its output should look like (so you end up with fluffy language as opposed to real content).
If the model doesn’t improve by June/July, I will be cancelling my subscription, as it is no longer valuable. I’ve started having to revert to googling questions and fully constructing my essays by hand, because it omits MANY points when I try to get it to merge sections for me. In the past I could tell it to be very detailed and exhaustive, but those words mean nothing to the model anymore.
I thought I was the only one experiencing this, but recently the quality of the output has degraded to a pulp. I’m actually shocked at the difference. If I were to pinpoint the moment, it was Feb 21, when there was an incident around “Unexpected responses”. While that specific incident has been fixed, it feels like the model has been dumbed down. Is it deliberate? Is it Sora? Is all the focus on it now?
It has been such a great tool. It would be awful if it didn’t get back on track. Sharing my two cents so others who are experiencing this can chime in, and hopefully OpenAI is taking notice and doing something about it!
Yes, same here. I noticed the quality of answers to coding questions has seriously degraded. I thought it was me at first, then did a quick Google search and found this discussion.
Why would they do this? I’m confounded. I thought AI was going to improve over time, but instead it’s gone backwards.
For me too, it’s almost unusable now. I feel the value of the paid subscription is very low. I hope some competition comes up; they don’t seem to care about their users.
I’ve experienced similar issues. I have been a member for almost two years now and used to get real value out of the membership; however, the lack of useful answers lately has been very frustrating. It’s repeating itself, even when I prompt it to stop repeating or giving the same answers. The amount of long text paragraphs with almost no valuable information is out of control. I’m contemplating cancelling if no major improvements are implemented by OpenAI, as it’s simply not working anymore. I wonder what the company did to impact its performance this much and whether they’re aware of the frustrated subscribers.
It was such a valuable tool before. Over time, the coding assistance became super fast, yet also super unhelpful, to the point where I would call it useless.