GPT-4o gives inefficient, long answers when used for coding

It’s ironic that while GPT-4o is faster, which is nice, it is less efficient overall, because it insists on giving long answers when used for coding.
I have made a custom GPT and even set my personal preferences to say that I only want small code snippets, even when asking about bigger parts of the project.
For some reason, it INSISTS on trying to give me the complete code of a whole method or even class where it changed literally two characters on one line.
Here is an example that led me to write this post: I was missing “()” when calling a method somewhere. Instead of just showing me where, it rewrote the whole freaking class just to add the “()”.
It’s cool and all that it has the ability to remember a whole class, but how can I get it to give me feedback on only the missing part?

Right now what I do is interrupt it and edit my previous question (in frustration, of course), writing “ONLY SHOW ME THE PART YOU CHANGE ONLY SHOW ME THE PART YOU CHANGE ONLY SHOW ME THE PART YOU CHANGE”. Like a madman :smiley:

Before the recent 4o update, it was way better at keeping it simple.


Ah, the duality of man! Before gpt-4o everyone complained the model was lazy and never gave the full code, and now we’re seeing the opposite!

Perhaps you can set something different in your custom instructions for how the model should behave when providing code in its output?


Haha! I was trying to be nice. I like that it’s faster and not as lazy.

But that is exactly what I did: custom instructions where I ask it to give only small code snippets when I’m asking about code, and even a custom GPT made solely for the purpose of getting it to stop giving complete code all the time, by asking it for small code snippets and even providing examples of cases where I want them.


I agree completely, it’s very nice but a bit overly verbose sometimes.

Try making a custom GPT with this in the instructions:

When providing modified code to the user you must be concise, provide only the modified snippets rather than the full comprehensive code. The user is short on time, so it’s important that you keep it short and to the point.
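If you’re using the API instead of a custom GPT, the same wording can go in a system message, which tends to be followed more consistently than instructions buried mid-conversation. A minimal sketch of a Chat Completions-style request payload; the model name and the user message are placeholders, not recommendations:

```python
# Sketch: the concise-snippet instruction from above as a system message
# in a Chat Completions-style payload. Model name and user message are
# placeholder examples only.
import json

instruction = (
    "When providing modified code to the user you must be concise, "
    "provide only the modified snippets rather than the full "
    "comprehensive code. The user is short on time, so it's important "
    "that you keep it short and to the point."
)

payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": instruction},
        {"role": "user", "content": "I'm missing () on a method call somewhere in this class: ..."},
    ],
}

print(json.dumps(payload, indent=2))
```

As this thread shows, though, no instruction placement is guaranteed to stick every time.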


GPT-4 was lazy. So OpenAI trained version 4o to write longer code, but this led to ‘dumb’ behavior where it can’t determine when full code is actually necessary. There are many cases where GPT-4o repeats code verbatim, claiming it made changes when it didn’t.

It’s frustrating that a year after GPT-4, the supposed ‘best we have’ is GPT-4o. Intelligence has been left aside.


Sometimes it’s irritating, especially when it modifies just one line… On the other hand, it’s very useful when the change is a hard one… The solution could be in the prompt, where you can specifically ask for concise code, but it seems it is not responding…

Agree 100% on this issue. I keep referring it to the instructions (which say to skip explanations), but it ignores them and goes on with long explanations, repeating code again and again.


Yeah, we agree both use cases are good: the whole code, so you can just copy/paste, which is faster than editing a few lines; or just the modified lines, because you don’t want to edit/paste the whole code, you just want to know what was modified. Both should be possible, and if you ask for one or the other, the AI should comply. Maybe someone can share a prompt or custom instructions that work?
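Until someone finds a prompt that reliably works, one local workaround: when the model dumps the whole file anyway, diff its answer against your original so you only read the changed lines. A small sketch using Python’s standard `difflib`; the file contents here are made-up examples:

```python
# Workaround sketch: diff the model's full-file answer against your own
# copy locally, keeping only the added/removed lines. Pure stdlib.
import difflib

original = """def greet(name):
    print("hello", name)

greet
""".splitlines(keepends=True)

model_output = """def greet(name):
    print("hello", name)

greet()
""".splitlines(keepends=True)

# unified_diff yields headers, hunk markers, context and change lines;
# keep only the +/- change lines, dropping the +++/--- file headers.
changed = [
    line
    for line in difflib.unified_diff(
        original, model_output, fromfile="mine.py", tofile="gpt.py"
    )
    if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
]
print("".join(changed))  # → "-greet\n+greet()\n"
```

Here the diff collapses the regurgitated file down to the one-line fix (`greet` → `greet()`), which is exactly the missing-parentheses case from the original post.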

Same impressions indeed; for programming I went back to GPT-4.

Unfortunately, GPT-4 coding performance seems to have declined since the release of GPT-4o. Has anyone else noticed this? I am genuinely curious whether this is just a circumstantial finding on my part. If it is true, does anyone have an explanation for why it would happen? Either way, neither 4 nor 4o is a reliable coding assistant for me now.

It’s important to consider different use cases for code editing to accommodate various preferences and needs. Having the option to either copy/paste the entire code for quick editing or just select the modified lines for precise changes can enhance efficiency. It would be great if AI tools could offer both capabilities seamlessly. Sharing prompts or custom instructions for achieving this would be valuable.

Yeah, and sometimes it skips important lines or changes something it should not touch. Honestly, this model does not even see the changes in the code you provide it, so it makes that mistake a lot. It talks too much and does not follow instructions well. It’s a bit frustrating at times. Try asking it to stop talking and you’ll see the fun part.

I’m not trying to be a conspiracy theorist, but maybe someone is benefiting from long answers (more output tokens $$$) :stuck_out_tongue_closed_eyes:

Oh boy, I literally have the same issue. I have asked it several times to save in memory not to give full code. Unfortunately, it completely ignores the instruction.


Thank god I’m not the only one who saw this. I have been dealing with this, and it’s like 4o is gaslighting me worse than 4 did. I’m new to coding (I started in February) and I like it, but it’s hard, which I don’t mind. It’s just that I’m watching 4o biff code when I feed it proper examples to work off of. So I’m on here looking for a way to compensate and get back on track. I do not trust it for troubleshooting like I used to.