Are GPT writers a waste of time?

When I started using AI code-help tools in my IDE (a couple of years ago), it took me about a week to realize pretty much the same thing. Then (having some background in linguistics) I saw that the way I write code (especially my file/class increment logic) was too complicated for the AI to get my final intent. As if I was too “unpredictable” for it…

So I started gradually changing the steps I follow to outline a file/class (often comments first, sketching the logic as plain text/comments only), and the AI outputs drastically improved.
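
To make this concrete, here is a minimal example of what such a comments-only sketch might look like. The class and its methods are hypothetical, purely for illustration:

```java
// RateLimiter: throttle outbound API calls per client.
//
// Plan:
// 1. Keep a per-client queue of request timestamps (sliding window).
// 2. allowRequest(clientId) -> boolean: evict stale timestamps,
//    compare the count against the limit, record the new call.
// 3. reset(clientId): clear the window (admin tooling / tests).
// 4. Window length and max requests come in via the constructor.
//
// Edge cases: an unknown clientId starts a fresh window;
// the clock is injected so tests can fake time.
```

The AI never sees a blank file this way: by the time any completion fires, the intent is already spelled out at the top.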

Now I have the following tools/logic in code completions, each doing its own thing (always working from the top to the bottom of the file):

(Copilot Chat in the IDE is my pre-Google search tool for the stuff I don’t know, plus some references to the docs, plus the IDE’s own doc references; all of this runs alongside the process below.)

  1. Me - lay out a sketch of what I need to do as a plan of the final class/file/feature (comments).
  2. Me - define the structure and properties (from the top of the sketch), with final doc blocks for the first 2-3 of them only (see the seeding sketch after this list).
  3. Copilot - finish the doc blocks for the rest of the properties.
  4. IDE - generate getters/setters (when applicable; the dumb stuff).
  5. IDE - generate other (tooling) method stubs from live templates (more bootstrap stuff).
  6. Me - write the first several doc blocks for the newly added methods.
  7. Copilot - finish the rest of the doc blocks (with some of my help).
  8. Me + Copilot - implement the methods from above if not already implemented by the IDE or Copilot.
  9. Me - define the stubs for the business logic methods.
  10. Me - write doc blocks for several of the new methods, paying attention to expose the logic behind each method and its purpose.
  11. Copilot - finish the docs for the rest of the file (with my help).
  12. Me (business logic flow) + IDE (variable/method name completions at word level) + Tabnine (Claude 3.5 Sonnet, line completions) + Copilot (code block completions) → start implementing the most complicated methods (to get the core of the issue solved). As you progress, you do have to correct some of Copilot’s suggestions, but they are small and stay under control; sometimes I even turn it off when it gets “noisy”.
  13. Me + Copilot (method implementation) → implement the simpler methods in the business logic. By now Copilot is way smarter, because it has much more of the context it needs, along with the implementation style you want.
  14. Me + Tabnine Chat in the IDE (again Claude; not because it is smarter than Copilot, it’s not, but because Tabnine has been in this business longer and their IDE integration is so much better) - first pass at spotting optimization opportunities / possible abstractions / obvious potential bugs; accept or correct the suggestions.
  15. Me - dirty tests and a review of the code.
  16. Me + Copilot - finish the implementation of what is missing.
  17. Me - final review / maybe some more dirty tests.
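
To show what the “seeding” in steps 2-3 and 9-10 looks like in practice, here is a minimal sketch of a file just before the doc blocks are handed over to Copilot. Everything here (the RateLimiter class, its fields and methods) is hypothetical, not a real library:

```java
import java.time.Clock;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

/**
 * In-memory rate limiter with a sliding window per client.
 * Not thread-safe; wrap it in a synchronized facade if shared.
 */
public class RateLimiter {

    /** Maximum number of requests allowed per window. */
    private final int maxRequests;

    /** Window length in milliseconds. */
    private final long windowMillis;

    // The two finished doc blocks above set the pattern; Copilot
    // completes the blocks for the remaining fields in the same style.
    private final Map<String, Deque<Long>> windows = new HashMap<>();
    private final Clock clock;

    public RateLimiter(int maxRequests, long windowMillis, Clock clock) {
        this.maxRequests = maxRequests;
        this.windowMillis = windowMillis;
        this.clock = clock;
    }

    /**
     * Checks whether clientId may issue another request right now.
     * Evicts timestamps older than the window, compares the remaining
     * count against maxRequests, and records the new call if allowed.
     */
    public boolean allowRequest(String clientId) {
        // Business-logic stub (step 9): the doc block above exposes the
        // intent, so the line/block completions in step 12 have something
        // concrete to work from.
        throw new UnsupportedOperationException("TODO: step 12");
    }
}
```

The doc block on the stub is doing the real work here: it states the algorithm in one breath, which is exactly the context the completion engines pick up on.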

Then you can use the produced code in the file to generate tests (Tabnine Chat + Copilot Chat + Copilot suggestions) and, if needed, write docs/API specs using the code in the currently open tabs…
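
For the test-generation step, the output I aim for is roughly this shape: a hypothetical JUnit 5 skeleton for the RateLimiter sketch above, assuming allowRequest has been implemented by this point:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;
import org.junit.jupiter.api.Test;

class RateLimiterTest {

    // A fixed clock keeps the sliding window deterministic in tests.
    private final Clock fixed = Clock.fixed(Instant.EPOCH, ZoneOffset.UTC);

    @Test
    void allowsRequestsUnderTheLimit() {
        RateLimiter limiter = new RateLimiter(2, 1_000, fixed);
        assertTrue(limiter.allowRequest("client-a"));
        assertTrue(limiter.allowRequest("client-a"));
    }

    @Test
    void rejectsRequestsOverTheLimit() {
        RateLimiter limiter = new RateLimiter(1, 1_000, fixed);
        assertTrue(limiter.allowRequest("client-a"));
        assertFalse(limiter.allowRequest("client-a"));
    }
}
```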

The approach above reduces “blind” code generation by the AIs I use to the bare minimum, and the code is always produced in small chunks that can easily be corrected on the fly, almost without you paying attention to it.

Basically, with this approach you constantly control the whole logic and the implementation style, while the AI starts out doing the easy/dumb stuff; then, once it has more context, it fills in the gaps of the implementation that have become obvious and “boring” for you to do yourself. You stay focused on the most complicated business logic.

I literally feel like there are at least 4 of me in production if I compare my output now with what it was like in 2020.

A similar approach applies to text writing…
