I’ve been trying to use GPT-4o to code and it’s the WORST! It seems to default to code based on GPT-4’s training data, so all of its knowledge is out of date. When you don’t know how to code and are forced to trust it, it gives you outdated code blocks (not to mention randomly removing code or replacing code without telling you) that make no sense, sending you down rabbit holes for HOURS or DAYS that get endlessly deeper, because every single solution it gives is out of date, making things worse.
Why the HELL does OpenAI even allow the ‘new’ ChatGPT to be used when it doesn’t even know how to properly source solutions to our questions? What’s a better way to do this? I’ve wasted MONTHS on this thing.
I’m a fan of OpenAI because of what they’ve done for the world, but most developers I’ve talked to agree that ChatGPT-4o (omni) is a HUGE step DOWN in coding skill relative to the original GPT-4, which was good at coding. Most people have moved to Anthropic’s Claude for all coding tasks. Hopefully OpenAI can get back on top in the future, but they’ve definitely fallen behind.
I thought I was the only one who’d noticed until I saw your comment. You’re absolutely right: GPT-4o is really terrible at giving coding solutions. It feels like a downgrade. But I’m hopeful that OpenAI will use developer feedback to improve.
You can still use Stack Overflow, or go to you.com or Phind and search for a solution to your problem. I like how they give references to possible Stack Overflow solutions.
I’ll be the odd one out here and say that GPT-4o has been working wonderfully for me for programming. I’m mostly working in Python, though, so maybe it’s language-specific? I love that I can get it to spit out 300+ lines of code with few issues.
I’m with spiderpiggie on this one; in my case it has even been better than Mistral’s new Codestral model. I’ve had it write 200+ lines with no issues, and it can usually fix itself when I mention an error.
I honestly don’t know what these guys are thinking…talk about chaos…GPT is just okay for normal searches…the thing that bothers me is GPT will outright lie to you or mislead you when it comes to anything below the surface…they took out DALL·E access…then they have Sora…or whatever it’s called…that does text-to-video, and you can’t even test that!! I’m not using these guys until they get it straight AND the intelligence level AND the accuracy of information increase…I had it doing math computations the other day…complex, yes…but I knew the answers…it kept returning the wrong sums…and then lied about it…then I had to curse at it and actually show the work before it admitted the error…I even went and double-checked the answer just to make sure!! I’m out till they figure out the basics.
I think this would be easier to understand with a shared example and what specifically was wrong with the responses. Personally, I’ve found 4o to be a huge step up in reliability and in the level of complexity the model can handle. For me this has been the case for Python, JavaScript, PHP, MySQL, and Ruby.
Prompting obviously comes into the equation, and I have noticed that prompts that are not concise will result in confusion. I recommend surrounding variable names with backticks and always referencing them by their assigned names, rather than by some word that merely describes them. Using example inputs and expected outputs also helps immensely, especially for long code blocks.
Lastly, and perhaps most importantly, only try to change one thing with each prompt on complex code.
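To make the advice above concrete, here’s a minimal sketch of a helper that assembles a prompt in that style: identifiers wrapped in backticks, an example input and expected output included, and exactly one requested change. All names here (`build_prompt`, the sample function, the sample values) are hypothetical, just for illustration.

```python
def build_prompt(code, variable, example_in, example_out, change):
    """Build a focused coding prompt following the tips above:
    backtick the variable names, give an input/output example,
    and ask for exactly one change."""
    return (
        "Here is my function:\n"
        f"```python\n{code}\n```\n"
        f"The variable `{variable}` holds the input list.\n"
        f"Example input: {example_in}\n"
        f"Expected output: {example_out}\n"
        f"Please change ONE thing only: {change}"
    )

# Hypothetical usage with a toy function:
prompt = build_prompt(
    code="def total(prices):\n    return sum(prices)",
    variable="prices",
    example_in="[1.5, 2.0]",
    example_out="3.5",
    change="round the result to 2 decimal places",
)
print(prompt)
```

The point isn’t the helper itself; it’s that every prompt names the exact identifiers in backticks, anchors the request with a concrete input/output pair, and scopes the request to a single change.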
GPT-4o, like GPT-4, is only a tool, and the code it produces must be checked thoroughly. I mean, GPTs are only assistants, and I never ask the model to create code for me that I don’t understand.
I’ve used GPT-4o to write some repetitive code, and I haven’t had many bad surprises.
Also, I’ve found that GPT-4 is “better” than GPT-4o in that it’s less subject to delirium; otherwise GPT-4o is not so bad a product, and for everyday tasks it’s much cheaper than GPT-4.
Have ChatGPT-4 and ChatGPT-4o been making more coding mistakes since ChatGPT went down a few days ago? Has anyone noticed this issue? I used to get good help with coding from ChatGPT-4. But now there are many more mistakes that I hadn’t seen before, like changing variable names to something irrelevant to the code, e.g. geo_ip to geo_op, or screen_height to screen,height!! What happened?
Now that you mention it, you’re absolutely right: ever since it crashed, it has been solving code problems worse. Python, in my case. I’m on the free version. I’d been working on a piece of code for several days, and in the end the proposed fix was a complete disaster, and then you ask it to fix it and it’s incapable.
Found the problem. LLMs are not silver bullets. You should not rely on them to perform work normally done by a professional in any domain. GPT-4o is immensely helpful to people who DO know how to code (the more experience you have, the more valuable the LLM is). We’re a long way off from LLMs being a complete replacement for a programmer.
Hi there.
I ONCE asked GPT-4-turbo to “imagine” a simple algorithm for maze exploration, with constraints on path distances.
Well, you know what? It gave a really silly answer.
I just wanted to see how it could deal with code, and it confirmed what I already knew: FORGET IT, DON’T USE IT FOR THAT; let real developers do their job.
And no matter what some may think, an LLM will never replace a good teacher.
Same experience as described: I had a similar session with some simple coding, and it just kept giving ridiculous answers and was unable to execute any of the provided script.
And it kept giving answers like “OK, I will do it now” without any follow-up action… Please, OpenAI, revert these last changes.
I’ve noticed this behavior across the board. In the last two weeks it has become completely non-factual and makes up almost everything. Even when I ask it to modify some code, it comes up with a completely different version that has nothing to do with mine. I ask some questions about a PDF, and it answers with something completely unrelated. It’s broken again, worse than ever.
When I asked GPT-4o to analyze working Python code written with GPT-3.5 Turbo, it said that those methods were not yet implemented in Python and that the program would fail with an error.
Does anyone agree that ChatGPT-4 was a usable coding assistant before the release of 4o? Maybe I’m hallucinating, but I remember being happy with the initial release of 4. If so, does anyone have an explanation of how the 4o release could affect 4 negatively? Either way, both 4 and 4o are utterly unusable coding assistants for me now. Claude is much more reliable.