I wondered if anyone has any thoughts about Larry Ellison’s ideas around AI being used for surveillance. It feels like a fundamental misdirection of what AI’s potential could be.
Using AI for surveillance treats symptoms, not causes. It assumes that monitoring people into submission will create a “better” society, but it doesn’t address why crime happens in the first place. Inequality, poverty, trauma, and systemic neglect are at the root of so much harm, and they’re things that AI could meaningfully help to solve if we approached it with the right intention.
For example, AI could be used to analyze data on disparities in education, healthcare, or employment and then suggest targeted interventions. It could predict where mental health resources are most urgently needed or identify patterns that help policymakers invest in communities before crises escalate.
That’s transformative potential.
Footage of Larry’s point of view starts around 11:47 in Matt Wolf’s video: https://youtu.be/JPcxiWOOj2E?si=C9zqh7NsG9YLMZ0N
I wrote a blog piece about it here: Star Gate - StarQuest Media
I’m wondering whether this debate is still open, or whether Larry is convinced his plan is the only way forward to reduce crime.
AI’s potential to alleviate the causes of crime, suffering, and inequality is something I’ve heard Sam Altman speak about often.
It just seems like quite an important discussion for all of us.
Anyone else flagging this?