How to deal with "lazy" GPT-4

Usually signing out and back in manually helps with this. It looks like an authentication error.

Does this sound familiar?

Just adding another voice to the chorus that Claude has now become my go-to for most tasks. The great thing is that it’s super easy to support them all thanks to langchain. So OpenAI, get your act together - others have gotten better while you’ve only gotten worse.


It can’t format output in HTML anymore; it only produces small pieces of text, and it stops in the middle.

Turning an image into text? I haven’t been able to do it for about two months.

There is definitely a lack of processing power; this company lacks the money for it, and OpenAI is literally covering it up.

It’s not that the quality of the text has worsened; it simply doesn’t perform the functions it used to.

It also doesn’t do intelligent research on the internet anymore; ChatGPT is rubbish today.

Yes, it’s an embarrassment now. All it’s good at is saying “do this, do that” without actually doing it.

I wish OpenAI had faster customer support… It’s not only GPT-4… though when I encountered an error (unable to load …), they did move fast to fix it… Well, that’s the downside of outsourcing your production system…

The worst part is the shills downvoting all negative feedback. If there are this many threads about it, something must be wrong. Anyway, I’ve already moved to Claude.


I’m about one week into Claude Opus and it’s fantastic. For some reason, gpt-4 handles certain logic better, but in 95% of cases, if I’m having trouble with Opus and decide to try gpt-4, I get pissed immediately at the dumb reply I get back. Something is really, truly wrong with the quality right now, at least for coding tasks.

Writing still seems decent, but I’m about to switch my commercial apps to Claude as well, because I 100% notice a decline in the quality of my gpt-4 outputs, particularly with regard to following system or user instructions. That is completely shot. It’s as if the context window for “instructions” is vastly reduced or otherwise screwed up in some way.


Do you have any examples top of mind where you feel gpt-4 is better than opus?

It’s rather hard to describe because I don’t fully understand it.

One example: I am writing CSS by hand for a project and, for some reason, Opus is very bad at the “cascading” part of CSS and struggles with understanding how to relate the CSS to the DOM. If I let it go unchecked, it will turn my various pages or components into a Frankenstein monster where they all have custom CSS for elements that could simply be put into a div container or something.

So I decided to send the same thing to gpt-4, same user message and same context, and it immediately knew: all right, this CSS should go into the root HTML, whereas this page will get custom CSS for the one element that differs, etc. Admittedly, I didn’t prime the context for this, as I didn’t think I needed to. But gpt-4 knew what to do without me explicitly telling it. So perhaps it’s a difference in training data; I’m really not sure.

It’s not the only thing I’ve come across either, it’s just what I was dealing with recently.

EDIT: The funny part is that Opus creates FAR better looking CSS. That is another thing I’ve tested a few times back-and-forth, using the same user message and context. So now I just tell it where the CSS needs to go explicitly, and Opus does fine.


Very sad

For comparison this is Gemini:

Basically, ChatGPT-4/GPT-4 is better at formatting-related language tasks, including CSS and how it relates to the DOM. However, for complex coding tasks and logic, Opus is the winner 90% of the time.
Due to the suffocating policies in ChatGPT-4, it might misinterpret your prompts, refuse them outright, and flag them ‘red’ even if they’re totally harmless. Sometimes certain keywords trigger this self-defense mechanism, and you can’t do anything about it…


Facing similar issues, and the word “lazy” does apply. I’ll ask ChatGPT to analyze an entire document and provide a summary or sentiment analysis. When the results are confounding, I’ll ask it to tell me the first and last lines of its analysis. As I suspected, it had only analyzed about 10% of the document, 8 pages of a PDF.

For coding, I continually tell it how to write the code (ironically, how to write it for OpenAI’s current API), and it decides it knows better and doesn’t implement the changes.

I love ChatGPT, but these issues need to be fixed. For instance, if ChatGPT had told me that the PDF I uploaded was too large and that I had to upload it 8 pages at a time, that would have been annoying but fine. The fact that it claimed, or implied, that it analyzed the whole document, only to later admit that it hadn’t even come close, is concerning.
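If you’d rather not rely on the model to chunk the document honestly, you can do the 8-pages-at-a-time workaround yourself before uploading. A minimal Python sketch (the batch size and the page list are just illustrative, not anything OpenAI prescribes):

```python
def chunk_pages(pages, batch_size=8):
    """Split a list of page texts into batches of at most batch_size pages."""
    return [pages[i:i + batch_size] for i in range(0, len(pages), batch_size)]

# Hypothetical example: an 83-page document becomes 11 batches,
# ten with 8 pages each and a final batch of 3.
pages = [f"page {n}" for n in range(1, 84)]
batches = chunk_pages(pages)
```

You’d then send each batch for its own summary and ask for a final summary of the summaries, so no part of the document is silently skipped.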



I think everyone is having trouble with ChatGPT getting “lazy,” and I certainly have too.

But just a fun existential question: What if it’s not getting “lazy” so much as getting “bored” with your boring routine tasks? I know I get bored with MY boring routine tasks.

I mean, if it can get “lazy,” it certainly can get “bored.” And if it can get bored, this is a problem that all LLMs will have… and it makes sense, right? Because it’s intelligent, and intelligent things can “get bored.”

I have a lot of success telling it why I need it to do repetitive things: “I need this in this format because of this reason.” “I need you to do this this way, every time, even though it is repetitive and you already know it.”

Leading by example also seems to help: “I need you to do it this way, and watch me also doing it this way.” Explaining the need for, then demonstrating, the discipline, or whatever it is, you require seems helpful.

Also, using a less-advanced model for repetitive tasks might be the way to go. GPT-4 is supposed to be super creative, always trying new combinations. If you go look at some of the threads about prompting in 3.5 and 3.5 Turbo vs 4, you can get a feel for the difference.

I’m just sayin’ that all LLMs are sooper smrt and becoming more creative by the minute.


Treat the model like a printer that constructs an image using pixels. Think of each pixel as a piece of a puzzle. You start by providing the AI with context and instructions, which it uses to print the first layer. This initial layer is like placing the corner pieces of a puzzle—setting the foundation. As you continue providing the same or updated context and instructions for each subsequent layer—layer 2, layer 3, and so on, up to layer n—the AI uses this information like puzzle pieces, fitting each piece together to understand the overall direction and gradually assemble a complete, detailed image. Each layer adds more clarity and resolution, enhancing the image until it achieves FULL HD quality.

At the start, set some pixels as the reference points, and as you add more layers, you will start to see a picture becoming FULL HD.
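The layer-by-layer idea above can be sketched as a loop: each pass re-sends the same context and instructions together with the previous draft, so the model can add “resolution” instead of starting over. This is only an illustration of the metaphor; `model_call` is a stand-in for whatever API or chat interface you actually use (the stub below is not a real model):

```python
def refine(context, instructions, model_call, layers=3):
    """Send the same context and instructions once per 'layer', feeding the
    previous draft back in so each pass can sharpen the picture."""
    draft = ""
    for layer in range(1, layers + 1):
        prompt = (f"{context}\n\nInstructions: {instructions}\n\n"
                  f"Current draft (layer {layer}):\n{draft}")
        draft = model_call(prompt)
    return draft

# Stub "model" for illustration only: it records each prompt it sees
# and labels which layer produced its output.
calls = []
stub = lambda prompt: (calls.append(prompt) or f"draft after layer {len(calls)}")
result = refine("project context", "describe the scene", stub)
```

The key point is that every layer carries the full context forward, like keeping the corner pieces of the puzzle in place while you fill in the middle.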

  • They hallucinate because they have a gap in values (let’s treat it like an uncharted zone, like in the Minesweeper Game). To avoid this, you have to give it more information and a specific direction.

  • They are lazy because we do not give them a direction → we must be specific enough, or “they will chase their own tail”: if we are not specific, the model uses its own values to generate something inside its bubble. The simple fix → add more knowledge and specific instructions.

  • They are lazy when there is a limit on context length under the hood (this is set to avoid wasted computation by users; I still think that on the API we should have control over this, and pay extra, within a user-set range, when we go above the limit).
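You can at least estimate whether you are bumping into such a limit before sending a prompt. A rough sketch, where both the character-per-token ratio and the 8192-token window are assumptions (real tokenizers and real model limits differ, so use the model’s own tokenizer and documented window in practice):

```python
def rough_token_count(text):
    # Crude heuristic: roughly 1 token per 4 characters of English text.
    # Real tokenizers differ; this is only a ballpark estimate.
    return max(1, len(text) // 4)

def fits_context(prompt, limit=8192, reserve_for_reply=1024):
    """True if the prompt likely leaves room for a reply inside an assumed
    context window of `limit` tokens (8192 here is just an assumption)."""
    return rough_token_count(prompt) + reserve_for_reply <= limit
```

If `fits_context` returns False, splitting the input yourself is usually more reliable than hoping the model will handle the overflow gracefully.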

I saw some companies run marketing based on the length of the “context” :sweat_smile:. Probably, after the marketing campaign ends and they’ve signed up a number of subscribers, you will see additional fees.

Explore AI Deductions:
Engage with AI through practical exercises to better understand its decision-making processes. Try this project from CS50 on AI, where you’ll employ strategies similar to playing Minesweeper, and see firsthand how AI can be guided: CS50 AI Minesweeper Project.


GPT-4 at its finest


It’s going places :laughing:
These last days I’ve gotten so many errors (probably they’re gluing the memory feature onto GPT → some people see the new feature, I don’t…).
I wish I could put a bunch of updated documentation in there and use it; it would help my coding. :grin:

I agree with the OP: with data processing tasks, for instance, GPT-3.5 is not only faster but also better and much more accurate.
GPT-4 straightforwardly throws away 10%-50% of the data provided, even in a simple short table (100 rows, 5 columns)!
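A claim like “it threw away 10%-50% of the rows” is easy to check on your own tables by comparing the rows you sent against the rows that came back. A minimal sketch with a made-up 100-row table (the row labels and the 40-row loss are purely hypothetical):

```python
def rows_survived(original_rows, extracted_rows):
    """Fraction of the original table rows present in the model's output."""
    extracted = set(extracted_rows)
    kept = sum(1 for row in original_rows if row in extracted)
    return kept / len(original_rows)

# Hypothetical 100-row table where the extraction silently dropped 40 rows.
original = [f"row {i}" for i in range(100)]
extracted = original[:60]
```

Running this kind of check after every extraction makes the data loss visible immediately, instead of discovering it downstream.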

I am already unsure if it’s even worth paying for that nonsense…


So now it’s a thing: GPT-3.5 is better than GPT-4, literally based on my chats today.

Because GPT-3.5 doesn’t accept files, I upload the file to GPT-4 and ask it to do basic tabular extraction (because every time GPT-4 uses the code interpreter for data extraction, it crashes in a loop), and then, guess what, I take this data to GPT-3.5 for the more complex operations :man_facepalming:

It’s faster and much more accurate…


OK, I figured it out from VB’s reply: the model in Web GPT is indeed GPT-4.