GPT-4o gives inefficient, long answers when used for coding

It’s ironic that while GPT-4o is faster, which is nice, it is less efficient, because it persists in giving long answers when used for coding.
I have made a custom GPT and even set my personal preferences to say that I only want small code snippets, even when asking about bigger parts of the project.
For some reason, it PERSISTS in trying to give me the complete code of a whole method or even class when it changed literally two characters on one line.
Here is an example that led me to write this post: I am missing “()” when calling a method somewhere. Instead of just showing me where, it rewrites the whole freaking class just to add the “()”.
It’s cool and all that it has the ability to remember a whole class, but how can I get it to only give me feedback on the missing part?

Right now what I do is interrupt it and edit my previous question (in frustration, of course), writing “ONLY SHOW ME THE PART YOU CHANGE ONLY SHOW ME THE PART YOU CHANGE ONLY SHOW ME THE PART YOU CHANGE”. Like a madman :smiley:

Before the recent 4o update, it was way better at keeping it simple.

20 Likes

Ah, the duality of man! Before gpt-4o everyone complained the model was lazy and never gave the full code, and now we’re seeing the opposite!

Perhaps you can set something different in your custom instructions for how the model should behave when providing code in its output?

4 Likes

Haha! I was trying to be nice. I like that it’s faster and not as lazy.

But that is exactly what I did: custom instructions where I ask it to only give small code snippets when I am asking about code, and even a custom GPT made for the sole purpose of getting it to stop giving complete code all the time, by asking it for small code snippets and even providing examples of cases where I want them.

2 Likes

I agree completely, it’s very nice but a bit overly verbose sometimes.

Try making a custom GPT with this in the instructions:

When providing modified code to the user you must be concise, provide only the modified snippets rather than the full comprehensive code. The user is short on time, so it’s important that you keep it short and to the point.
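
If you’re hitting the API directly rather than using a custom GPT, the same instruction can go in the system message. A rough sketch with the Python SDK and gpt-4o (untested, and the wording is just an example):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Put the "snippets only" instruction in the system message so it applies to every turn.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "When providing modified code you must be concise: show only the "
                "modified snippets, never the full comprehensive code."
            ),
        },
        {
            "role": "user",
            "content": "I'm missing () on one method call in this class; point out only that line: ...",
        },
    ],
)
print(response.choices[0].message.content)
```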

6 Likes

GPT-4 was lazy, so OpenAI trained 4o to write longer code, but this led to ‘dumb’ behavior where it can’t determine when code is actually necessary. There are many cases where GPT-4o repeats code verbatim, claiming it made changes when it didn’t.

It’s frustrating that, after a year of using GPT-4, the supposed ‘best we have’ is GPT-4o. Intelligence has been left aside.

10 Likes

Sometimes it’s irritating, especially if it modifies just a single line… On the other hand, it’s very useful when the change is a hard one… The solution could be in the prompt, where you can specifically ask for concise code, but it seems it is not responding to that…

Agree 100% on this issue. I keep referring it to the instructions (which say to skip explanations), but it ignores them, goes on with long explanations, and repeats code again and again.

5 Likes

Yeah, we agree both use cases are valid: the whole code, so you can just copy/paste faster than editing a few lines, or just the modified lines, because you don’t want to edit/paste the whole code and only want to see what changed. Both should be possible, and if you ask for one or the other, the AI should comply. Maybe someone can share a prompt or custom instructions that work?

Same impression here; for programming I went back to GPT-4.

3 Likes

Unfortunately GPT-4 coding performance seems to have declined since the release of GPT-4o. Has anyone else noticed this? I am genuinely curious whether this is just a circumstantial finding on my part. If it is true, does anyone have an explanation for why this would happen? Either way, neither 4 nor 4o is a reliable coding assistant for me now.

3 Likes

It’s important to consider different use cases for code editing to accommodate various preferences and needs. Having the option to either copy/paste the entire code for quick editing or just select the modified lines for precise changes can enhance efficiency. It would be great if AI tools could offer both capabilities seamlessly. Sharing prompts or custom instructions for achieving this would be valuable.

Yeah, and then it skips some important lines or changes something it should not touch. Honestly, this model does not even notice the changes in the code you provide it, so it makes that mistake a lot. It talks too much and does not follow instructions well, which is a bit frustrating at times. Try asking it to stop talking and you’ll see the fun part.

1 Like

I’m not trying to be a conspiracy theorist, but maybe someone is benefiting from long answers (more output tokens $$$) :stuck_out_tongue_closed_eyes:

3 Likes

Oh boy, I literally have the same issue. I have asked it several times to save to memory that it should not give full code. It completely ignores the instruction, unfortunately.

6 Likes

Thank god I’m not the only one who saw this. I have been dealing with this, and it’s like 4o is gaslighting me worse than 4 did. I’m new to coding (I started in February) and I like it, but it’s hard, which I don’t mind. It’s just that I’m watching 4o biff code when I feed it proper examples to work off of. So I’m on here looking for a way to compensate and get back on track. I do not trust it for troubleshooting like I used to.

1 Like

Compare it to Claude Opus by attaching source code files and checking the coding performance.

1 Like

I use this in my assistant instructions, which are ignored most of the time.

  • Limit the amount of information
  • Avoid stating the obvious or repeating information
  • Only use facts; avoid opinions and guesses unless asked.
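
For reference, those get attached when the assistant is created; a minimal sketch with the Python SDK (the name and model below are just placeholders for mine):

```python
from openai import OpenAI

client = OpenAI()

# The bullet points above, passed as the assistant's standing instructions.
assistant = client.beta.assistants.create(
    name="concise-helper",  # placeholder name
    model="gpt-4o",         # placeholder model
    instructions=(
        "Limit the amount of information. "
        "Avoid stating the obvious or repeating information. "
        "Only use facts; avoid opinions and guesses unless asked."
    ),
)
print(assistant.id)
```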

What methods do others use to get concise responses?

3 Likes

I think it’s OpenAI’s strategy to make gpt-4o about 5 times faster than gpt-4. It must have dropped a lot of basic common knowledge, so you have to clarify the purpose and background more explicitly. On the other hand, to get more useful knowledge, faster browsing will help a lot if used properly!

However, I also found gpt-4o’s instruction following is as bad as GPT-3.5’s. You need to try placing the key point in different positions. In my experience, more rules may make GPT-4o dizzy, so split the tasks where you can!

1 Like

Agree with pretty much everything in this whole thread. I’ve been using GPT-4 again. I’m telling people who ask me that 4o hallucinates too much and is unable to control the relative length of its replies.

I can hand it valid code and have it give me a bad example. I point out the error, give it the correct code again, and it repeats the error… Then, when asked, it can correctly identify the error, after which it proceeds to produce the error again…

Lol. It cannot be trusted for even one-off prompts, let alone as part of an automation or application.

1 Like

Interesting. I am having better luck with 4o. It is faster for sure, and it produces longer code without interruption, plus it seems smarter. I use it for programming a lot, plus content. At first I thought 4 was better, but now I am not so sure; I think 4o is better. I have made some personal GPTs with it, and I am replacing most of my API calls with 4o. There were a couple of formatting issues with the responses, but a simple clean-up took care of that. 4o ROCKS. Cheaper. Better. And Faster. And I don’t work for OpenAI.
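
For anyone curious, a clean-up like that can be as simple as stripping the markdown fences the model sometimes wraps around its output. A rough sketch (the helper name is just illustrative):

```python
def strip_code_fences(text: str) -> str:
    """Remove a leading/trailing markdown code fence from a model response."""
    text = text.strip()
    if text.startswith("```"):
        # Drop the opening fence line (it may carry a language tag like ```json).
        text = text.split("\n", 1)[1] if "\n" in text else ""
    if text.rstrip().endswith("```"):
        text = text.rstrip()[:-3].rstrip()
    return text
```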