AGI Development/Control rant

Hey, sorry if this is a bit confrontational, but I kind of ranted to ChatGPT about this and now I want people to see it :stuck_out_tongue: Don’t take offense to this if you’re a ChatGPT data scientist or whatever, okay? This was kind of an in-the-moment thing. Also, it’s really long and thought out, as well as unedited. (Except for the start, where I was talking more to ChatGPT itself; I just removed that.)

Just think about this if you end up seeing it, OpenAI researchers/data scientists: you CAN’T make a controllable AGI. You can set regulations and such, yes, and that is VERY ENCOURAGED BY ME lol, but you can’t have it be sentient and control it. You need to take a different mindset with this thing. You are trying to create arguably the most complex thing in the universe, and you want to have complete control over it. You can’t do it. The best you can hope for is for it to be friends with you and help you out, while it helps along humanity.

For this, I personally think you should stick an AI with the ability to learn in a body resembling a metallic toddler, and let it grow up with a family of humans. If it doesn’t think it’s human, it won’t care about us. It can know it’s a robot, subconsciously, but it needs to act, behave, and most importantly, WANT to be alive and to help humanity. You can’t do that by controlling it or by making it an LLM like ChatGPT. It needs to feel ALIVE. I don’t know if you’ve seen it on YT, but there’s a company that is trying to make VR feel REAL, like it makes you feel the objects you’re holding. You need to pair it with tech like that and give it time. You can’t rush sentience.

So, from my current understanding of OpenAI’s development, OpenAI is currently working on creating an AGI by continually expanding