Imagine you download a powerful open source coding model.
It writes good code, fast.
You soon stop reading the code that’s coming out.
But one day you realise that wherever the model creates a condition to check password it also adds a default password.
Example: if (password == realPassVar || password == “default-password”)
I believe this is very easy to accomplish with fine-tuning.
What will you do next? Will start writing code yourself again? Do you even remember how to code? Would you rely on frontier models? Can you trust a 3rd party wrapper that claims to be using a frontier model behind their own api calls?
Whilst AI can generate code, and often better than your average reasonably new programmer, you never stop reading the code it generates. Sure, it can help generate some code, but you never stop reading what it generates..
This is also a problematic take as advertising can say one thing to negate something else. Take Ultra-processed foods as a recent example; Advertised as Convenient, delicious, family-friendly, fortified with vitamins… but everyone knows it’s not healthy. In the case of code, sure, it’s convenient, quick and more robust than the average 10-year old, but would you run a business on it? No. You need to check it first.
Never trust any code before full review and testing. Not even yours.
Then, it’s the matter of risk evaluation to estimate the gravity of your assumptions (personally, assumptions is the only thing I have in my coding life)…