GPT-4 seems to have been significantly downgraded over the last few days

I have been using GPT-4 for a couple of years and it has never been worse. I give it material to analyze along with a few questions, and instead of getting answers, I get a brief one-paragraph review. The output is much faster, but who cares if it is garbage?
Sometimes 3.5 gives better answers. I do not know what happened, but it just got so much worse.
It seems to have started when the 'Memory' feature was introduced. Sometimes I ask narrowing questions within a conversation and it just starts answering based on some previous conversations instead.


I don’t know what you’re using it for, but in terms of creative writing, it’s actually been much better in the last 2 days than it has been for months, mainly helped by the fact that it no longer tries to keep its output under 1000 tokens.

Hmm, interesting. I mostly use it for coding.

I had the exact same experience with coding and discussing computational papers. It gives such bad answers that I stopped using its code entirely. It was an excellent help over the last year, but after the May upgrade to GPT-4o, things got worse. I tried both GPT-4 and GPT-4o, and the answers are so bad that I started using Llama 3 via Perplexity as an alternative. Today GPT-4 even miscopied my 10 lines of code, breaking the tensor calculations and burying a random matrix deep in the code, even though I explicitly told it to adjust the code I had written. It took me a total of 30 minutes to fix, because I didn't expect GPT-4 to make such a ridiculous mistake.