How to deal with "lazy" GPT-4

Usually signing out and in manually helps with this. Looks like an authentication error.

Does this sound familiar?

Just adding another voice to the chorus that Claude has now become my go-to for most tasks. The great thing is that it’s super easy to support them all thanks to langchain. So OpenAI, get your act together - others have gotten better while you’ve only gotten worse.
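The multi-provider point can be sketched without LangChain at all. The snippet below is a stdlib-only stand-in: `call_gpt4` and `call_claude_opus` are hypothetical placeholders for real provider SDK calls, and the registry plays the role that LangChain's common chat-model interface plays in practice, so switching providers is a one-line change.

```python
from typing import Callable, Dict

# Hypothetical stand-ins for real provider SDK calls (e.g. the OpenAI or
# Anthropic Python clients); each takes a prompt and returns the reply text.
def call_gpt4(prompt: str) -> str:
    return f"[gpt-4] {prompt}"

def call_claude_opus(prompt: str) -> str:
    return f"[claude-3-opus] {prompt}"

# One registry, one interface: swapping models is just a different key.
MODELS: Dict[str, Callable[[str], str]] = {
    "gpt-4": call_gpt4,
    "claude-3-opus": call_claude_opus,
}

def complete(model: str, prompt: str) -> str:
    """Route a prompt to whichever provider is registered under `model`."""
    return MODELS[model](prompt)
```

With this shape, moving an app from gpt-4 to Opus means changing the model name, not the calling code.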


Yep, this is hilarious. I wonder if this is why Claude has been getting the overloaded issue so frequently. Literally everyone is realizing how shit gpt-4 has become.

There are still some tasks that gpt-4 handles better than Claude Opus. But overall, I’m pretty damn satisfied with Opus. It’s a great model. I’ve found it relies more on “tried and true” methods as opposed to going outside the box or deriving its own logic for things, but nonetheless, it’s FAR more usable than gpt-4 right now. I try swapping between the models and gpt-4 pisses me off on the very first response almost invariably.

I don’t know what in the world OpenAI is doing or why there is no official word on this. Either they don’t know, don’t care, or don’t even use their own service enough to know it’s becoming utter trash, in my opinion anyway, compared to what it used to be.


It can’t format output in HTML anymore, even for small texts; it stops in the middle.

Turning an image into text? I haven’t been able to get that working again; it’s been broken for about two months.

There is definitely a lack of compute; this broken company is short on money for it, and OpenAI is literally covering it up.

It’s not that the quality of the text has worsened, it simply doesn’t perform the functions it used to.

It also doesn’t do intelligent research on the internet anymore; ChatGPT is rubbish today.

Yes, it’s an embarrassment now. All it’s good for is saying “do this, do that” without actually doing any of it.

I wish OpenAI had faster customer support… It’s not only GPT-4: when I encounter an error (unable to load …), I wish they’d make a fast move to fix it… Well, that’s the downside of outsourcing your production system…

The worst are the shills downvoting all negative feedback. If there are so many threads about this then something must be wrong. Anyway, I’ve already moved to Claude.

https://reddit.com/r/ChatGPT/comments/1by23zu/every_thread_about_how_dumblazyuseless_chatgpt_is/

I’m about one week into Claude Opus and it’s fantastic. For some reason, gpt-4 handles certain logic better, but in 95% of cases, if I’m having trouble with Opus and decide to try gpt-4, I get pissed immediately at the dumb reply I get back. Something is really, truly wrong with the quality right now, at least for coding tasks.

Writing seems to be decent still, but I’m about to switch my commercial apps to Claude as well, because I do 100% notice a decline in the quality of my gpt-4 outputs, particularly with regard to following system or user instructions. That is completely shot. It’s as if the context window for “instructions” is vastly reduced or otherwise screwed up in some way.


Do you have any examples top of mind where you feel gpt-4 is better than opus?

It’s rather hard to describe because I don’t fully understand it.

One example: I am writing CSS by hand for a project and, for some reason, Opus is very bad at the “cascading” part of CSS and struggles to relate the CSS to the DOM. If I let it go unchecked, it will turn my various pages or components into a Frankenstein monster where they all have custom CSS for elements that could simply be covered by a rule on a shared div container or something.

So I decided to send the same thing to gpt-4, same user message and same context, and it immediately knew: all right, this shared CSS should go at the root html level, while this page gets custom CSS only for the one element that differs, etc. Admittedly, I didn’t prime the context for this, as I didn’t think I needed to. But gpt-4 knew what to do without me explicitly telling it, so perhaps it’s a difference in training data; I’m really not sure.
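To illustrate the failure mode described above (the markup and class names here are mine, invented for the example, not from the actual project): the “Frankenstein” version repeats the same declarations on every component, while the cascade-aware version hoists inheritable properties onto the parent and leaves per-element rules only for what genuinely differs.

```css
/* Duplicated per-component styling (the "Frankenstein" pattern): */
.card-title { font-family: Georgia, serif; color: #222; margin: 0; }
.card-body  { font-family: Georgia, serif; color: #222; }
.card-link  { font-family: Georgia, serif; color: #222; }

/* Cascade-aware version: font-family and color are inherited properties,
   so declaring them once on the container covers all children; only the
   genuinely different bits keep their own rules. */
.card       { font-family: Georgia, serif; color: #222; }
.card-title { margin: 0; }            /* margin does not inherit */
.card-link  { color: #0a58ca; }       /* the one element that differs */
```

The second form is both shorter and easier to restyle later, since a theme change touches one rule instead of three.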

It’s not the only thing I’ve come across either, it’s just what I was dealing with recently.

EDIT: The funny part is that Opus creates FAR better looking CSS. That is another thing I’ve tested a few times back-and-forth, using the same user message and context. So now I just tell it where the CSS needs to go explicitly, and Opus does fine.


Very sad

For comparison this is Gemini:

Basically, ChatGPT 4/GPT-4 is better at formatting-related language tasks, including CSS and how it relates to the DOM. However, for complex coding tasks and logic, Opus is the winner 90% of the time.
Due to the suffocating policies in ChatGPT 4, it might misinterpret your prompts and downright refuse them, flagging them ‘red’ even when they’re totally harmless. Sometimes using certain keywords triggers this self-defense mechanism, and you can’t do anything about it…
