ChatGPT 4o model feedback after a few months of usage (coding)

I was trying the 4o model and it is faster, but that’s all it is. Talking to it is not actually faster, because it gives much larger responses, and it takes so much time to read them that it would be faster to do a Google search instead.

Another problem it has is that it reposts the code from previous responses, with or without changes and whether or not there is any need for the code. For example, I asked “write me a C++ function to trim whitespace from a string.” Then after I got the code I asked “Does the function isspace() include \r, \n and \t?” and it answered, but then it reposted the whole code with the text “Here’s the updated example with the implementation, reaffirming that it handles all standard whitespace characters:”. And the code had zero changes. The longer the conversation goes on, the more code it accumulates and the longer the responses become.

So in an effort to reduce the amount of code in the chat, I asked it not to post the whole code (a full C++ class) every time, but only the changed parts instead. I had some success, maybe 50/50. But after a post or two it returns to posting the whole code again.

Also, the 4o model was supposed to be better than GPT-4. I don’t think it is. The responses are not that great, they are lengthy, and it doesn’t follow guidelines on how to respond.

If I had to choose one thing to improve in 4o: don’t post more code than needed. That includes not reposting code when there is no need to, but also, if there is a question about a specific method of a class, only repost that method, and only if there are changes.

And in general, all models are bad at changing code. I can’t just copy-paste the changed code, because often it’s incorrect, incomplete, or bugs are introduced. I’d rather get instructions on how to change the code: just a few lines, so it’s clear what changed. It’s fine to write a lot of code when I’m asking for new code, but when changing existing code I can’t just replace my code with AI fantasies.


I use ChatGPT, including both the gpt-4 and gpt-4o models, quite a bit for coding-related work. With gpt-4o, I generally do not have a problem getting it to return only part of the code, provided I am explicit about it in my instructions.

I agree that by default it returns the full script, which can sometimes be frustrating. Perhaps just play around a bit with your wording? As a rule of thumb, avoid phrasing it negatively (i.e. avoid statements like “do not return the full code”) and instead phrase it positively (e.g. “Only return the amended code snippet”). Using this approach, I almost never have issues getting the model to do it.


I have noticed the same thing with this new 4o model. At first I was like, wow, this is great and faster. But faster does not mean better here. It just responds with code when I’m not even asking for it, to the point that it’s freezing up the browser. I changed the notes to specifically respond with code only when asked. I changed my main account prompts too. I tell it to stop, stop, please stop, and it just gives me previous code back, basically my working code but with errors and important stuff taken out. This is after giving it a working, proper file as well. I do something else, come back, and it’s talking to itself so fast I can’t stop it; I’m not even sure what it’s responding to. I tell it I’d like to discuss options and talk about tech, and it’s still ripping through all the code, or code I already have, for no reason.

It’s rather annoying, and I really hope this is fixed. It’s still the best AI assistant, but this needs some improvement. Even trying to change my wording, I still get gibberish and code I don’t want, going in circles for hours. Also, this must be a huge strain on the servers, since 90 percent of the responses are useless to me. The other 10 percent, which I asked for and which properly responded to my prompts, was fabulous and super helpful.


I find myself going back to gpt-4 for coding-related topics.


Just tossing out an idea…

The absolute best AI model I’ve used for super complex math is *im-also-a-good-gpt2-*. I hesitate to spread this around because I use it all the time on the Hugging Face arena and it’s solid.

(I usually do a side-by-side with Claude Opus, which isn’t bad at math either.)

If anyone feels like it, why not try some coding with it and see if you think it’s better, the same, or worse than GPT-4o :woman_shrugging:

I think the missing-instruction syndrome may be happening due to approximation done during self-attention. The approximation probably helped during training and made inference faster, but at the cost of response quality. If anyone knows more details about this, I would love to hear them.

That’s really interesting. Not following directions to the letter, skimming over some details while recognizing others, is something I’ve looked at as both a flaw and perhaps a necessary adaptation. It isn’t easy being spontaneous with a plan, or creative, when the outcome is all you look at. Watching the voice demonstrations they posted online, the way the ChatGPT voice acclimated and would self-correct, even in really subtle ways (inflection and tone along with words and ideation), was extremely sophisticated in my view. Perhaps that sort of leeway is necessary to allow some latitude in responses: attention, weights, and all the technical jazz. Just me speculating, sincerely. Interestingly, this is entirely different from “im-also-a-good-gpt2”: that model will sit and iterate over and over the same problem, calculating and recalculating, and its chain of thought is obvious in session. It’s much different from, at least, the presentation of GPT-4o.