GPT-4 is getting worse and worse every single update

Something interesting to note here.

The previous time an influx of “ChatGPT fell off” posts happened, the general consensus was “it’s forgetful”. A result of maximum-token-window issues? I think the max at the time was 4k? Can’t remember. I was under the impression that it was using some very clever token management, but there was a post recently saying it was (and still is) just simple truncation.
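For anyone curious what “simple truncation” would even look like, here is a minimal sketch. Everything in it is an assumption on my part (the word-based token counter especially is a crude stand-in; a real system would count actual model tokens, e.g. with tiktoken), but it shows the basic idea: oldest turns get silently dropped until the history fits the window.

```python
# Hypothetical sketch of simple truncation: drop the oldest turns until
# the history fits the token budget. The whitespace word count below is
# a crude stand-in for a real tokenizer.

def count_tokens(message: dict) -> int:
    # Assumption: one "token" per whitespace-separated word.
    return len(message["content"].split())

def truncate_history(messages: list, max_tokens: int) -> list:
    # Keep the system prompt (first message); trim the oldest turns after it.
    system, turns = messages[0], messages[1:]
    budget = max_tokens - count_tokens(system)
    while turns and sum(count_tokens(m) for m in turns) > budget:
        turns.pop(0)  # discard the oldest turn first
    return [system] + turns
```

If this is roughly what happens, it would explain the “forgetful” complaints: the model never saw your earlier messages, they were just cut off the front of the conversation.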

Now, interestingly, the second influx of issues revolves around “laziness”. A result of the maximum window being too long? GPT can recall earlier context if explicitly asked to, but skips over it otherwise?

I genuinely would like to know why we went from 16k to 128k… LOL, who tf asked for this? I feel like an old man yelling at a cloud, but I remember looking at the 32k model back in the day and thinking “yeah, cool I guess, it has its niche uses” :older_man:

I’m assuming that’s the model ChatGPT uses, tbf. Not sure if it does; OpenAI doesn’t tell us nuthin’. I think any long-time OpenAI supporter is low-key masochistic at this point.

To those having issues

Try creating an “encapsulated summary” of your current objective and use it to create a Custom GPT. Then, for each task: ask it to perform the task, delete the conversation, and start again fresh.

I think the reason you are seeing “laziness” in ChatGPT is that the conversation has become very long.

Honestly, this has been one of my top rules for using ChatGPT: always keep conversations to a MINIMUM. As soon as the conversation hits a tangent, delete it and start fresh.
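If you’re using the API rather than a Custom GPT, the same idea can be sketched in a few lines. The names here (`OBJECTIVE_SUMMARY`, `fresh_conversation`) are mine, not anything official: the point is that every task starts as a brand-new two-message conversation seeded with the summary, instead of one ever-growing thread.

```python
# Illustrative sketch of the "encapsulated summary" workflow:
# one standing summary of the objective, and a clean two-message
# conversation per task. Nothing from previous tasks carries over.

OBJECTIVE_SUMMARY = (
    "You are helping build a CLI todo app in Python. "
    "Constraints: stdlib only, tests with pytest, PEP 8 style."
)

def fresh_conversation(task: str) -> list:
    # Summary as the system prompt, the task as the only user message.
    return [
        {"role": "system", "content": OBJECTIVE_SUMMARY},
        {"role": "user", "content": task},
    ]

# Each task would then be sent as its own request, e.g.:
#   client.chat.completions.create(model=..., messages=fresh_conversation(task))
# and the messages list thrown away afterwards rather than appended to.
```

Same principle as deleting the ChatGPT conversation: the model only ever sees a short, focused context, so there’s nothing for it to get “lazy” about.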
