Why is AGI the goal?

Why is AGI the goal? Humans have failed to make social media safe due to inherent bias and unaligned value systems. What makes us believe AGI will be any different, when the stakes of destruction are so much higher?


Someone will eventually build a machine with no limits. The question is who, and what goals will they have?


Exactly. So is the idea that companies like OpenAI want to achieve AGI to counter bad actors who might also develop it? In that case, why allow public use? We continue to develop nuclear weapons technology, but that technology has significant barriers to entry. Everyone has a computer and access to data. Personal computers have their technical limitations, of course, but there are surely workarounds.

I think (and correct me if I am wrong) AGI is still hypothetical; there's no guarantee, as with most things, that it will ever be achieved.

“AGI is not likely” is not a good answer. If you assume it's not possible but you're wrong, you miss out. The risk, as you say, is too high, and the opportunity cost of ignoring that risk is correspondingly high. Therefore the only logical course of action is to proceed as though AGI is possible and to do it right.

If AGI is not possible, we will still solve many problems along the way. If it is possible, then we’re more likely to do it right.


Great response; that clarifies it nicely.