Have we underestimated the complexity of dangerous AI?

It takes far less complexity for an AI to become dangerous than most of us have assumed. In fact, we crossed that threshold a decade ago, and almost no one seemed to notice until recently.

In short, the content-delivery algorithms behind social media feeds have learned to exploit our evolved cognitive biases, because the engineers never put proper safety controls in place; they were likely picturing Asimov-style AGI as the threat to watch out for.

Circa 2014-2015, social media companies entered the first AI arms race. The prize was user attention, monetized through advertising revenue. Each platform deployed algorithms whose primary goal was to deliver content that keeps each user on the site for as long as possible.
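To make that objective concrete, here is a minimal toy sketch of an engagement-maximizing ranker. It is not any platform's actual system; predict_session_seconds and its feature names are hypothetical stand-ins for a learned model trained on engagement logs. The point is simply that the objective rewards predicted time-on-site, and nothing in it penalizes exploiting cognitive biases to earn that time.

```python
# Toy sketch of an engagement-maximizing feed ranker (illustrative only;
# not any real platform's system).

from dataclasses import dataclass, field


@dataclass
class Post:
    post_id: str
    # Hypothetical features, e.g. emotional charge, share counts.
    features: dict = field(default_factory=dict)


def predict_session_seconds(user_history: list, post: Post) -> float:
    """Hypothetical stand-in for a learned model: expected extra seconds
    the user stays on-site if shown this post."""
    # Placeholder heuristic: emotionally charged, heavily shared content
    # tends to correlate with longer dwell time in engagement logs.
    return (post.features.get("outrage", 0.0) * 30.0
            + post.features.get("shares", 0) * 0.1)


def rank_feed(user_history: list, candidates: list) -> list:
    # The only objective is predicted time-on-site; nothing in this
    # ranking penalizes content that exploits cognitive biases to get it.
    return sorted(candidates,
                  key=lambda p: predict_session_seconds(user_history, p),
                  reverse=True)
```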

Just a few years later, as many are no doubt aware, we witnessed an unprecedented rise in political instability and mental illness, along with a flood of peer-reviewed studies showing strong correlations between the amount of social media use and a range of negative outcomes.

Because ChatGPT and other LLMs are trained on these websites, their training data will come to reflect the instability caused by unfettered content-delivery algorithms.

I know the link to ChatGPT is fairly small compared to the full scope of what I am asserting, but I don’t know where else to talk about what amounts to a literal public health emergency driven by bad AI safety practices.

If you want to test this theory yourself, pull up Wikipedia's list of cognitive biases, open a logged-out social media front page in an incognito window, and see how many posts you can find that don't exploit an evolved cognitive bias.