Discussion about the future of AI

OpenAI has openly acknowledged the impacts and risks associated with AI and AGI. Could this forum please have a category for discussing this topic, since it seriously affects the entire future of humanity? The OpenAI board cares strongly about it, but that isn't reflected here.

I love technology and never expected to experience the current level of AI capability within my lifetime. Yet already a single prompt can generate many lines of code without the requester having any knowledge of coding. Soon the code itself will become irrelevant and requests will simply be answered, much as we use electricity without everyone understanding how it works. Then AI systems will start making requests of other AI systems in a knowledge explosion, and humanity will stop being the smartest thing on the planet - a shift that is already underway. AI is improving far faster than humans are.

I thought my job would be safe because I deal with people, but now I realise that AI can do a better job. Corporations will take the cheapest approach and replace humans with AI for the sake of profitability. The wealthy and big companies will pay for access to significantly better grades of AI, widening the gap between rich and poor. That is why people are talking about a universal basic income.

This sounds like the opening of a Terminator movie, but these aren't the views of a nutter spouting conspiracy theories. This is the very real challenge ahead of us. OpenAI has been having these discussions too, which apparently led to its recent high-profile organisational conflicts. The scales are weighing technological advancement against the best interests of humans, and it seems inevitable that the technology will keep advancing.

Merry Christmas.
