Has anyone found that GPT-4o is not as good at coding as GPT-4? The 4o model keeps repeating code multiple times, and at times provides so much unnecessary response that it confuses things.
Yes. At least in the trial I did, the gpt4o coding answers were just garbage. Didn’t follow my instructions at all and butchered the code. I’ll stick with the free gpt3.5.
Yes, 4o has been pretty bad at coding compared to GPT-4. Faster does not mean better; maybe 4o is best for simple conversation, but in terms of utility OpenAI seems to be going backwards. I wonder why, can someone explain this?
Complete opposite feeling here: it seems to be able to write complete code. It used to always cut off at 120 lines max; now it can go up to 200-300+ and the context is far better understood than before. But I do get the frustration when you ask for very simple things and get spammed like hell…
I agree with the comments here.
I’ve found 4o infuriating when helping with coding discussions. Instead of helping by discussing issues and working things out methodically, it literally just wants to continually churn out code. Sometimes nothing has changed, yet it churns out more and more code.
It’s too keen to respond.
Has anyone experienced similar?
I have the same issue as well. If I ask something for which code isn’t even needed, it will output 100 lines of code with complete steps, which, if used as-is, don’t even work.
GPT-4o is faster and can definitely output more code at speed than GPT-4, but the incorrect code is annoying at times, as the errors keep coming back. Good that you find it helpful.
Claude and Gemini are way better than GPT-4o for coding.
I think I will move to Gemini and the free Copilot app from Microsoft (Copilot is too slow with output for free users).
Starting a new conversation is the only workaround. But I gave it complex architecture code and it wrote 400+ lines at once that 100% worked (not just easy tasks), and it did well quite a few times. I do get the same issues you mentioned, but I can get it to fix them by starting a new convo.
I’ve found similar. I find GPT-4 more willing to “reason” things out first before jumping in while GPT-4o has a “casual” hold-my-beer attitude to everything.
Well, no one cares about how fast it can generate code; speed isn’t the metric that matters. Copilot is essentially GPT-4… I tried Gemini Pro 2 months ago and it was nowhere near what GPT-4 could do for me… I would really like to see the use case where GPT-4o failed for you; I really wanna know why you haven’t had the same feeling as I do.
GPT-4o code output got better in the last couple of days; hopefully it keeps improving with time. It was failing at 90% of the code logic I needed when it launched, but in the last 2-3 days it has given working output with better implementations.
Gemini 1.5 and Flash seem to understand and do better than previous Gemini versions. Agreed, GPT does better at code, but I guess Google is catching up now.
I’m noticing the same thing. It’s also acting like we pay based on the # of words it replies with. I’ll ask it for some help with code, so it gives it, but then gives 2 pages of “here’s how to use the code”. I ask for a revision, it does, followed every time by the 2 pages of how to use the code. Even when I tell it it’s not working right, or never mind, I’ve moved on, it STILL replies with a book-long message about “sorry you’re frustrated, let’s try another shot at this”. It’s borderline spam. I copy the code, try it in my app, and sometimes when I go back to ChatGPT it’s STILL writing a reply out. I also was using it for help with VBA in Excel, and EVERY time it updates the script it has to give a page about how to use VBA in Excel, with a list of steps like first press Alt-F11 to open the VBA editor. 50 revisions in and it gives me the book-long explanation every time.
Custom GPTs seem broken also. I’ve been trying for days to make one that speaks like a verbal conversation, with 1-3 sentence replies max. No matter what I do, every reply is a book. I test and say “How can I determine the volume of water in my pool?” and it’s like “Here’s a comprehensive guide on how to calculate pool water volume” and writes a page-long answer. I’ve also found that if I tell it to model its attitude after a real person or character, it never does anymore either, and gives me the same generic GPT no matter what I try to create.
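If you end up calling the model through an API instead of the Custom GPT builder, one blunt workaround is to stop fighting the instructions and clip the verbose reply yourself on the client side. A minimal sketch (the `clip_sentences` helper is hypothetical, not part of any official SDK, and its naive regex splitter will over-split on abbreviations like "e.g."):

```python
import re

def clip_sentences(text: str, max_sentences: int = 3) -> str:
    """Keep only the first few sentences of a verbose model reply.

    Naive splitter: treats '.', '!', or '?' followed by whitespace
    as a sentence boundary.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(sentences[:max_sentences])

# Example of the kind of page-long answer people describe above:
reply = ("Here's a comprehensive guide. First, measure your pool. "
         "Then multiply length by width by depth. Finally, convert to litres.")
print(clip_sentences(reply))  # keeps only the first three sentences
```

This obviously wastes the tokens you already paid for; it just stops the wall of text from reaching the user.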
It is incredibly redundant at times. The longer the conversation goes, the more redundant it becomes, spamming a lot of the same stuff, then going in circles. You either have to insult it or get really angry for it to stop, or just close the conversation for good and start a new one. The new one is almost always better at solving the same problem you asked the previous one, especially if it’s a complex question. This has been happening since the start of GPT-4o for me.
Yep. Utterly dreadful.
Keeps repeating the same code without changes or just going round in circles instead of resolving issues
I’m really struggling with 4o. I’ve reverted to asking it to reply only with a thumbs up after sending it code, or saying things like, “does the code do this … yes or no?” I’m finding it really annoying how it literally goes off and produces vast amounts of output without clarification or request. Someone said above how it feels like it gets paid per word! I totally agree. I’m not convinced it’s even paying attention to my system prompt; it literally ignores instructions. Help before I lose my mind!
Totally agree with the points about it repeating itself; it feels like every output just repeats things again and again. Seems like the focus is on outputting more repetitive information so they get paid more via the API’s output pricing. It does feel like they are doing this deliberately.
I have a similar experience, in that 4o totally spams me even if explicitly told not to. A simple question and it churns out text, doing things I didn’t want it to do. It messes up the direction of the conversation so much that I had to revert to GPT-4. In terms of coding I have not done a lot of tests, but it has worked correctly on the first try for a few Rust and Python things.
Yep,
GPT-4o is VERY bad at coding, no matter the language; every other simple task seems to overwhelm it too…
This is not a step forward
It’s a super bad downgrade honestly, a waste of my 21€ for this month of release. Horrible model.
Yeah completely agree, it did become much worse…