Has anyone else noticed a substantial decrease in the usability of GPT?

I’ve been using GPT-3.5 and GPT-4, but they seem overall worse than before. I’ll ask for a simple task, like parsing some data and rewriting it in a different format, and the model won’t know where to start, skips large portions, and adds more hallucinations. I feel like I’m having to baby it just to get close to what it could do a few weeks ago. I might be imagining it, but I’m curious whether others have felt the same.

4 Likes

Yes, absolutely. An enormous downgrade between versions has essentially killed the functionality of GPT-4 for my use cases. This is a huge problem that nobody seems to want to talk about. After weeks of incredible results, I now frequently get straight-up gobbledygook answers from GPT-4: output that begins mid-sentence, or fake references and fake data… It’s enough to make me want to cancel my subscription until this bull**** gets ironed out. It’s like I’m talking to GPT-3.5, or a concussed, pre-May-12-update GPT-4… Let’s keep these threads alive until we get some answers.

4 Likes

Similar experience here. The releases about a week after the plugin introduction in mid-May seem to have drastically decreased reasoning and coding performance. Over the last few months I think I had explored the limits of GPT-4 in my use cases, code generation and reasoning about linear algebra, and in recent weeks I’ve noticed those limits moving. To get useful responses, I have to reduce the complexity of my questions and prompts to the point where they are hardly of interest any more. At the current level of performance, I feel like a babysitter or a teacher, and I’m paying for it.

2 Likes

I completely agree. The current version of GPT-4 feels like babysitting compared to previous versions. I’m hopeful that the code interpreter will perform more similarly to previous versions, or even better.

1 Like

I was really hoping there would be positive updates in these threads, but alas, it just keeps getting worse. GPT-4 has been degraded so much over the last ~7 weeks… There are countless threads here testing, measuring, and describing the issue, and zero response from the devs. A big shift from quality to quantity by OpenAI. Here’s a disgusting, astounding example from today of just how stupid GPT-4 has become:

Prompt:
OK, let’s revise your last writing output to be more formal. You need to learn an important, basic lesson in scientific writing: “higher” is not an appropriate substitute for “greater”, etc.

ChatGPT-4:

I appreciate your feedback, and you’re correct. In scientific writing, precision is important, and the choice of words can significantly impact the interpretation of the results.

“Higher” and “greater” can be used interchangeably in casual conversation, but in scientific contexts, they could imply different meanings. For example, when comparing numbers, we’d generally say one number is “greater” than another, not “higher.” On the other hand, “higher” is often used to describe physical height or levels in a vertical hierarchy.

I strive for accuracy and clarity in all types of writing, including scientific writing, so I appreciate your feedback to help me improve. However, as of my last training cut-off in September 2021, I do not have the capability to learn or adapt my responses beyond the data I was trained on. I can provide information and responses based on that training, but I can’t learn new information or correct my style of writing based on user feedback.

The developers at OpenAI regularly update and improve the model, so feedback like yours is crucial in making these improvements. Thank you for your input!

Two months ago, this prompt would have elicited a very different, far superior response from GPT-4… Anyone who wasn’t using GPT-4 prior to the April/May updates won’t understand how true this statement is, sadly. Really makes me wonder W. T. F. am I even paying for…!?

1 Like