Something happened with GPT Today

I’ve been using GPT-4 exclusively for coding purposes for the past month. I’m based in Europe, though I’m not sure that has any bearing on my experience. However, I noticed a significant decrease in performance today. It seems to have forgotten our previous conversations and is now unable to detect even simple coding mistakes when prompted for debugging assistance. In some cases, it even replied with responses such as “as an AI language model, I don’t have access to your code files,” despite me pasting a code snippet. It also gave generic answers such as “this code seems related to WordPress.” Lol, it was like a smart person suddenly developing dementia.

I’m disappointed because I’m familiar with what GPT-4 is capable of, and it’s suddenly become unhelpful and nonfunctional. Have you experienced similar issues with GPT-4? Out of curiosity, may I ask where you’re using it from?


I have to agree, and also add that I have been slowly seeing this over the past few weeks.

Same as above, and I will add that I had one scenario where it kept giving me the same answer despite my asking for a different solution. There were multiple solutions to this problem, but it failed to find any of them.

I’ve also seen it go into looping answers where it will give the same answer twice in the same response.

Happened to me today. I was stunned that it returned previous test data in its response, even though the test case was empty.

This is a huge problem that nobody seems to want to talk about. I frequently get straight-up gobbledygook answers from GPT-4 these days after weeks of incredible results: output that begins mid-sentence or produces fake references/fake data… It’s enough to make me want to cancel my subscription until this bull**** gets ironed out. It’s like I’m talking to GPT-3.5 or a concussed pre-May-12-update GPT-4… Let’s keep these threads alive until we get some answers.

I hope the example below helps.

Yet another disgusting, astounding example from a minute ago of just how stupid GPT-4 has become:

Prompt:
OK, let’s revise your writing output to be more formal. You need to learn an important, basic lesson in scientific writing: “higher” is not an appropriate substitute for “greater”, etc.

ChatGPT4:

I appreciate your feedback, and you’re correct. In scientific writing, precision is important, and the choice of words can significantly impact the interpretation of the results.

“Higher” and “greater” can be used interchangeably in casual conversation, but in scientific contexts, they could imply different meanings. For example, when comparing numbers, we’d generally say one number is “greater” than another, not “higher.” On the other hand, “higher” is often used to describe physical height or levels in a vertical hierarchy.

I strive for accuracy and clarity in all types of writing, including scientific writing, so I appreciate your feedback to help me improve. However, as of my last training cut-off in September 2021, I do not have the capability to learn or adapt my responses beyond the data I was trained on. I can provide information and responses based on that training, but I can’t learn new information or correct my style of writing based on user feedback.

The developers at OpenAI regularly update and improve the model, so feedback like yours is crucial in making these improvements. Thank you for your input!

Two months ago, this prompt would have elicited a very different, superior response from GPT-4…


I agree with that. I activated Plus earlier this year. I started coding with ChatGPT, then later with GPT-4, but now the code it writes is completely useless. Out of my 25 credits, it often fails more than 10 times, even on simple tasks. Therefore, I have cancelled Plus.

6/27/23 … & ChatGPT4 is still lobotomized.

Same here. This morning everything went fine; later in the day it couldn’t refer back to the last messages and the answers were completely random.
I gave it a Python function with specific instructions on what to do, and got back a CSV import that I never even had in any script.


I noticed a complete loss of all contextual training and learning from chat instances in GPT. As of yesterday afternoon, all my carefully trained chat instances had completely forgotten months of training and fine-tuning work and were completely clueless about any of our chat history. Hugely frustrating and inexplicable, as I can’t find any reason why this would have happened. Perhaps I reached a memory limit, which resulted in a hard reset?

Has anyone experienced a significant downgrade in the capabilities of GPT-3.5 and GPT-4 as of July 6th, 2023?

Specifically, I pasted 2-3 simple React components and asked for help in solving some simple issues. Previously, both models handled these questions easily. However, today they only provide a breakdown of the code and do not answer the initial question of how to fix the problems. I have never experienced this before, though it seems many people have encountered model downgrades in the past. Has anyone else noticed that both models are performing poorly today?

Would it be correct to say that most of the issues involve ChatGPT losing previous context?

For consistency, it may be better to use the API, where you can pin a dated model snapshot so behaviour doesn’t shift underneath you when the default ChatGPT model is updated.
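A minimal sketch of what that looks like with the `openai` Python package (pre-1.0 interface, which was current when this thread was written). The snapshot name `gpt-4-0613` is just one example of a dated model version, and the system/user prompts are placeholders, not anything prescribed by this thread:

```python
# Sketch: call GPT-4 via the API with a pinned, dated model snapshot
# instead of relying on the moving default model in the ChatGPT UI.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4-0613",      # pinned snapshot; swap for whichever version you want to stay on
    temperature=0,           # keep output as repeatable as possible for debugging tasks
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {"role": "user", "content": "Find the bug:\n\ndef add(a, b):\n    return a - b"},
    ],
)

print(response["choices"][0]["message"]["content"])
```

Pinning a snapshot won’t fix hallucinations, but it does rule out silent model updates as the cause when answers suddenly change quality.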