Is ChatGPT 4 really good at coding?

A few days ago, OpenAI announced “major improvements” to ChatGPT 4.
The announcement really read like an App Store release note:

  • fixed bugs
  • improved app

which can usually be translated as: “fixed bugs, went to meetings, and worked on things no one cares about”

So I had very low expectations for this new version.
But I have to be honest: ChatGPT 4 got REALLY good at coding.
I’ve been using it for a while for pair-programming (debugging, refactoring, or just writing specific methods). The code it provided often failed to meet the requirements, so I sometimes took some inspiration from it but rarely used it as-is. ChatGPT 4 also regularly hallucinated, even after I asked it to spell out its reasoning step by step before providing the solution, which led me to insult it once - or twice (and I still feel sorry for that).

For the past few days, not only has it stopped delving deep into the tapestry of engineering, but it has also started to provide code snippets that are fully functional and well written. It seems to understand the specifications, requirements, and constraints it is given much better.

Here are a few prompting techniques I use to get better answers:

  1. Start by explaining the situation and the problem you are encountering (if any), or what you want to achieve. Give details about your stack and any other information you consider important (for example, a code snippet or error messages). Avoid sharing secrets (API keys, for example).

  2. Ask it to rephrase: “First, I want you to rephrase the situation / problem / constraints / requirements to make sure you perfectly understand it”.

  3. Define the ideal solution: “Think step by step and give me a solution which is simple / modern / robust / clean / performant / easy to maintain / which fully respects the requirements and the constraints / which doesn’t introduce regressions”

  4. Once you get the answer, you can still improve it by providing clear feedback!
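To make the workflow concrete, here is a minimal sketch of how steps 1-3 could be composed into a message list for a chat model. The function name and the example situation are hypothetical illustrations, not an official API; step 4 (feedback) would simply be a follow-up message in the same conversation.

```python
# Hypothetical helper: compose chat messages following the steps above
# (explain the situation, ask for a rephrase, define the ideal solution).
def build_coding_prompt(situation: str, stack: str, snippet: str = "") -> list[dict]:
    """Return chat messages covering steps 1-3 of the prompting technique."""
    # Step 1: situation, stack, and any relevant details.
    context = f"Situation: {situation}\nStack: {stack}"
    if snippet:
        # Never paste secrets (API keys, credentials) into this snippet.
        context += f"\nRelevant code:\n{snippet}"
    return [
        {"role": "user", "content": context},
        # Step 2: force the model to rephrase before answering.
        {"role": "user", "content": (
            "First, I want you to rephrase the situation / problem / "
            "constraints / requirements to make sure you perfectly "
            "understand it."
        )},
        # Step 3: define what the ideal solution looks like.
        {"role": "user", "content": (
            "Think step by step and give me a solution which is simple, "
            "robust, easy to maintain, and which fully respects the "
            "requirements and the constraints."
        )},
    ]

messages = build_coding_prompt(
    situation="A Flask endpoint returns a 500 error on empty JSON bodies",
    stack="Python 3.11, Flask 2.x",
)
```

The resulting list can be passed to whichever chat completion client you use; keeping the three steps as separate messages makes it easy to reuse the first one and vary the others.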


Hi!
Thanks for sharing your perspective. I also noted some positive changes.
Still, I haven’t done extensive coding with the upgraded model yet, but I did note that replies about Python code now include type hints.
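For readers unfamiliar with the term, this is the kind of annotated signature I mean (my own illustrative example, not model output):

```python
# A type-hinted function: parameter and return types are annotated,
# which is what the upgraded model's Python replies now tend to include.
def average(values: list[float]) -> float:
    """Return the arithmetic mean of a non-empty list of floats."""
    return sum(values) / len(values)
```

The hints don’t change runtime behavior, but they make generated code much easier to drop into a codebase checked with a tool like mypy.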
During a conceptualizing session (where I usually talk to myself, using the chat interface as a tool to articulate my thoughts while the model replies with “something”), I noted that the conclusions the model derives seem better.
So this is a second positive development.

On the other hand, we are back at a stage where the custom instructions are completely ignored. It’s not a biggie, but it is annoying to have a feature that doesn’t work and effectively clutters the context window. I am back to copy-pasting this additional info into the chat box, just like in the old days.

It was good before; however, the quality of GPT 4 has drastically declined over the past week. I used to rely heavily on GPT 4 for coding and for writing complex files and structures, but for the past few weeks it has been throwing out random answers, often with errors. There is zero logic, and each new message goes in a completely different direction.