It might not be directly tied to your example, but I have a general remark on this:
As someone who has a master’s degree in Deep Learning with a focus on NLP, and who uses AI at work and in personal projects all the time, I can tell you that some apprehension about this technology is ABSOLUTELY warranted.
Now, I think there are 3 main concerns people might have, and I think their validity differs:
- fear of AI “taking over”: This is basically the fear of what happens when AI becomes too complex and smart for us to understand. It does not have to be malicious, but just as chimps wouldn’t be able to stop us if we decided to eradicate them, we wouldn’t be able to stop AI (at some point).
- fear of AI being used in malicious ways: Deepfakes, misinformation, etc. This is not a fear of the unknown, but a fear of what is already happening and will surely be on the rise. We already know how much damage fake news can do in our highly polarized world.
- fear of over-dependency on AI: In my eyes, this point is often overlooked by … everyone. If we start using AI for everything, and those models become larger, more complex, and therefore exclusive to a very few powerful individuals/corporations, that is a dramatic shift of power and a decrease in independence and freedom for the vast majority of people.
So, IMO, the first point is overblown. With our current methods (i.e. how neural networks are trained and built), it is not a question of WHEN we will reach AGI, but IF. The AIs we build are very task-specific, and AGI is a bit more than just slapping together multiple specialized AIs (e.g. adding a chatbot to four robot limbs and a vision system).
The second point, most people get quite well. It is currently not solved, and though there is a conceptually simple fix, AI companies won’t do it because they don’t actually care that much about those ethical issues. (The fix is to sift through the entire dataset, REMOVE all names from the text tags, and then train GPT and all the other models again FROM SCRATCH. But ofc that’s too expensive.)
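To make that concrete, here is a minimal sketch of the kind of preprocessing I mean, assuming you use spaCy’s pretrained NER to redact person names from captions/tags before training. The model name and the `[NAME]` placeholder are my own choices for illustration, and exact results depend on the NER model you load:

```python
import spacy

# Pretrained English pipeline with an NER component
# (assumes `python -m spacy download en_core_web_sm` has been run).
nlp = spacy.load("en_core_web_sm")

def redact_names(text: str) -> str:
    """Replace every detected PERSON entity with a placeholder token."""
    doc = nlp(text)
    redacted = text
    # Walk entities right-to-left so character offsets stay valid as we edit.
    for ent in reversed(doc.ents):
        if ent.label_ == "PERSON":
            redacted = redacted[:ent.start_char] + "[NAME]" + redacted[ent.end_char:]
    return redacted

captions = ["Jane Doe met Barack Obama in Berlin."]
print([redact_names(c) for c in captions])
# -> ['[NAME] met [NAME] in Berlin.'] (places like Berlin are kept)
```

Running a pass like this over billions of captions and then retraining from scratch is exactly the part that makes it expensive.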
The third point seems to get no attention at all, but is IMO the most likely to cause huge issues. Imagine that Google and Amazon sell your data to various clients who then use AI to predict … everything about you. Your health risk profile will be used by healthcare providers not to give you better treatment, but to milk you for more money. Your YouTube feeds and Google searches will be used to predict who you vote for, how likely you are to commit crimes, etc. CCTV will be obsolete, because AI will know everything about you already. Imagine getting sentenced for crimes you didn’t even commit … yet.
(There has already been a case in the UK where a couple was wrongly convicted based on a flawed probability calculation, so this is not sci-fi.)
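The statistical trap behind such predictions is easy to show with made-up numbers (the accuracy and base rate below are purely hypothetical): even a very accurate “risk” model flags mostly innocent people when the thing it predicts is rare.

```python
# Hypothetical numbers: a "crime risk" model that is 99% accurate,
# applied to a population where 1 in 10,000 people would actually offend.
population = 1_000_000
offenders = population // 10_000       # 100 actual offenders
innocents = population - offenders     # 999,900 innocent people

true_positives = 0.99 * offenders      # offenders correctly flagged: 99
false_positives = 0.01 * innocents     # innocents wrongly flagged: 9,999

flagged = true_positives + false_positives
print(f"{true_positives / flagged:.1%} of flagged people are actual offenders")
# -> roughly 1%: for every real offender, the model flags about 100 innocents
```

That base-rate effect is why “the model says there’s a 99% match” sounds damning in court but proves almost nothing on its own.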
I think in most cases, it is not the fear of the unknown. It is precisely the fact that we know - or at least suspect - how such powerful tools can be abused.
Why do we like democracy? Because power is distributed instead of concentrated. A benevolent dictator could maybe improve many things, but a shitty one will make things so much worse. We don’t like concentration of power in politics, yet we accept it with AI…