The conversation memory doesn't seem to work the way it used to. It forgets what it was doing after about 4 replies.
I've also noticed that relatively easy tasks (when asking GPT-4 in ChatGPT to write code) get a reply telling me the task is "too complex".
Last month, I built an extremely complex application with GPT-4 in ChatGPT, but now it can't even create a login screen for a React site?
What's happening? It's clear that you've somehow altered it, but I'm afraid you've done it for the worse. Even Bard is giving better results and not failing to complete the tasks I give it… and Bard is nothing compared to what I achieved last month with GPT-4.
It seems to have been getting worse over the past week.
I noticed a similar change on 3.5 when 4 was being released.
I got access to the plugins alpha yesterday; today it shows as Plugins Beta with a minor UI change.
BUT the performance of GPT-4 has decreased dramatically. It is a bit disappointing.
The speed of the response has increased.
It makes a lot more errors in the code it produces, and has a hard time fixing them.
This is a huge problem that nobody seems to want to talk about. I frequently get straight-up gobbledygook answers from GPT-4 these days after weeks of incredible results. Output that begins mid-sentence, fake references, fake data… It's enough to make me want to quit my subscription until this bull**** gets ironed out. It's like I'm talking to GPT-3.5, or a concussed pre-May-12-update GPT-4…
The May 12 version was a major downgrade from May 3. I thought it couldn't get any worse, but now I am even more dismayed by the latest "May 24 Version," which appears to be a further step in the wrong direction.
@#$!
We should really be coming together to push for the ability to select previous versions (i.e., a toggle for using the May 3 version vs. the current release). I started a thread on this a couple of days ago, but no replies.
Yes, it has definitely gotten worse. So sad. And Bing? Don't even bother.
Please follow up here in this thread as we give examples, because this thing got severely degraded after they released the plugins beta.
Major bummer. Wish there was some kind of announcement regarding these concerns from so many paying customers…
I'm a paying customer for "Plus," and where the hell has that plus gone?
I used to code daily with GPT-4, because GPT-3 is pretty useless for coding anything interesting, and now after 2-3 prompts it doesn't even remember the context of the ongoing conversation?!
Like, for real: *@ck privacy, and forget Sam Altman's travels to Europe, which are probably related to the privacy memory limitation. We are paying for the powerful, unrestricted model.
“oh no the alignment scary thing…”
C'mon, OpenAI, what the heck.
I'm also highly disappointed. It seems as though OpenAI is severely limiting the capabilities on purpose, yet not hesitating to take our money.
4 months ago this was worth it. Today they are restricting so much access that I’m not even excited to use this program anymore. I just sit there and wonder how much worse it’s going to get before I wise up and cancel my subscription.
Here's a disgusting, astounding example from today of just how stupid GPT-4 has become:
Prompt:
OK, let’s revise your writing output to be more formal. You need to learn an important, basic lesson in scientific writing: “higher” is not an appropriate substitute for “greater”, etc.
ChatGPT4:
I appreciate your feedback, and you’re correct. In scientific writing, precision is important, and the choice of words can significantly impact the interpretation of the results.
“Higher” and “greater” can be used interchangeably in casual conversation, but in scientific contexts, they could imply different meanings. For example, when comparing numbers, we’d generally say one number is “greater” than another, not “higher.” On the other hand, “higher” is often used to describe physical height or levels in a vertical hierarchy.
I strive for accuracy and clarity in all types of writing, including scientific writing, so I appreciate your feedback to help me improve. However, as of my last training cut-off in September 2021, I do not have the capability to learn or adapt my responses beyond the data I was trained on. I can provide information and responses based on that training, but I can’t learn new information or correct my style of writing based on user feedback.
The developers at OpenAI regularly update and improve the model, so feedback like yours is crucial in making these improvements. Thank you for your input!
W. T. F. am I even paying for…!?
Two months ago, this prompt would have elicited a very different, far superior response from GPT-4.
Hey, I noticed the same thing today. In the past I've NEVER gotten frustrated with GPT-4. I loved the deep reasoning and the ability to stay focused on the important parts of our discussion; however, tonight I felt as if I was talking to Bard. The reasoning was just really subpar. I've never once had to tell it to "please be more specific" until now.
OpenAI, give us the unrestricted model back; make it happen. In the meantime, while you're giving us this dumbed-down, limited version, we are going to start trends on Twitter about this scam of lowering performance without even communicating any details.
I have been using GPT-4 since its inception, and I was able to write code in Android Studio with much success. However, these threads are from April. Do you think we should create a post gathering all ChatGPT-4 subscribers to petition OpenAI to restore an April version, or at least give us the option to return to it? Essentially say, "Hey, OpenAI, we want the April version back or we all pull our subscriptions"?