What does it mean when every human can "write code"?

In my AI code workflow, I have it create edits, and then diffs, and then I approve the diffs line by line, or accept the entire file. That is my workflow!

It works great BTW!


Do you let AI create tests?

The big question is: to what extent do humans want to disappear from the equation? Machines supervising machines doing truly important things—not just generating new code for the latest Smart TV or the newest medicine, but serious matters.

The concerns this scenario raises about AI in all its forms, especially in the field of weaponry, are quite unsettling to me. The truth is, I have significant worries because a machine's response capability is far superior to a human's. The natural tendency, then, is to delegate defense-system capabilities to AI, and that in itself worries me. (Terminator, haha.) Not because it might suddenly gain consciousness and attack us, but because it might make a mistake: believing it is under attack and launching nuclear bombs in response to a system error. That truly concerns me.

Humans make mistakes, but at least we are aware of them. When systems fail, the consequences can be catastrophic.

Why would you do that?

Tests are your definition of what you want the system to do. You should not be accepting a system based on tests it writes.
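To illustrate the point (a minimal sketch; `slugify` here is a hypothetical function standing in for whatever you ask the AI to build), the human-written tests come first and act as the acceptance spec:

```python
# Human-written spec tests for a hypothetical slugify() the AI is asked to build.
# The tests define acceptance; the AI's code must pass them, not the other way round.

def slugify(title: str) -> str:
    # Stand-in implementation so the spec is runnable; in practice the AI writes this.
    return "-".join(title.lower().split())

def test_spaces_become_hyphens():
    assert slugify("Hello World") == "hello-world"

def test_already_lowercase_is_unchanged():
    assert slugify("hello") == "hello"

def test_multiple_spaces_collapse():
    assert slugify("a  b") == "a-b"

if __name__ == "__main__":
    test_spaces_become_hyphens()
    test_already_lowercase_is_unchanged()
    test_multiple_spaces_collapse()
    print("spec passed")
```

The point is the direction of authority: if the AI also writes the assertions, it is grading its own homework.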


Doesn’t that make your claim unverifiable?


It seems unlikely that anyone would ever suggest you lack a strong sense of self-importance. I’m just puzzled as to why you would choose to engage with those of us who are mere mortals.

I can show it next week… I don't have to give you access to do that.

Why are you so sarcastic? I mean seriously. Do you assume you're the expert in everything just because you wrote a few lines of code somehow?

Tests help, but you can’t test everything.

It's best to look directly at the logic of what is happening behind the curtain, with understanding eyes, for yourself. That is worth a million tests IMO.


No!!!

You do not get AI to write tests. You can't allow it to verify its own code.

You write tests to validate the code output of the AI.

Otherwise, assuming it is a usefully complex system, what else do you have with which to methodically check what it has built?!


Absolutely. If you prompt without an agent chain along information graphs…

You write tests in human language and it creates the code… but more like in an interview.

Did you try it, btw?

Ok at the end of next week I’ll ask you to use it to build something and you will release the result on GitHub and demonstrate it works.

I would disagree here.

AI can generate a ton of test inputs and corner cases.

Plugging these in and running these is a good idea for basic assurance.

Then look at the code, and make sure nothing funky is going on, and you get solid assurance.

Not sure how else you would do it. What the AI writes is what I write, and vice versa.
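For example (a sketch; `dedupe` is a hypothetical stand-in for AI-written code), generated corner-case inputs can be run against invariants you state yourself:

```python
# Checking an AI-written function against machine-generated corner cases.
# dedupe() is a hypothetical example of code the AI produced: remove duplicates,
# keeping first occurrences in order.

def dedupe(items):
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# Corner-case inputs an AI might enumerate: empty, singleton, all-same, mixed.
corner_cases = [[], [1], [1, 1, 1], [3, 1, 3, 2], ["a", "a", "b"]]

for case in corner_cases:
    result = dedupe(case)
    # Invariants *you* decide on: no duplicates remain, every output came
    # from the input, and output is never longer than input.
    assert len(result) == len(set(result))
    assert all(x in case for x in result)
    assert len(result) <= len(case)

print("all corner cases passed")
```

The AI supplies volume (inputs); the human supplies judgment (the invariants and the final code read).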


Neat topic, reminds me of the conversation when Windows came out. :rabbit::infinity::heart::four_leaf_clover:


We have to politely disagree.

The tests are your spec and I would not allow an AI to influence the spec.


No, I am not releasing anything on GitHub.
I'll make the software available on a server.

Ok so this is just empty bravado.


No, it is working software.

I’m curious if you are thinking of testing monoliths, or more basic granular small functions.

I am talking small functions and building systems out of those, but it sounds like you are talking monoliths with many interconnected parts, where you need a different approach than what I am saying.
