Such a fantastic group of people on here. I want to share a bit of my understanding of this thread's question, plus some intuitions about what I think will need to be done to make such an AI an AGI.
So the OP is asking, I think, whether this AI is sentient and what that would mean. Intelligent machines get gradually smarter as you go from worm-level intelligence to human-level, or even from young to wise. "Being intelligent" really comes down to how long you survive or how widely you can clone yourself (both resist death, i.e. change of some given object/data). There are many different machines, but only some live longer, so those are what you end up with in the long run. Intelligence is survival. Pattern. Living long creates a pattern in time, and cloning widely creates a pattern in space. Solid objects are vulnerable to change, so to persist as a pattern, we use patterns found in data to solve unseen problems.
It is the ability to store data (think DNA) and compute on it (think DNA mutation/change/death) that allows one to learn and survive. These language AIs are really impressive, and they all work by detecting patterns in their inputs and creating output patterns, i.e. combining ideas (e.g. home + wheels = trailer). Like DNA, where mom's and dad's traits are mixed, brains do it too. We need more mutation right now; later, the world cools down and becomes more predictable and organized: homes lined up, buildings grouped by use and time. Almost everything in your home is already predictable, e.g. cube- or round-shaped: chairs, tables, boxes, devices, keys, etc. Later we will use less energy and be more reliable and durable, and you can spot an error in a building by comparing it to the buildings around it; if they are all supposed to be the same, the neighbors also tell you how to fix the broken one.
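To make the "combining ideas" point concrete, here's a minimal toy sketch of composing two concepts by adding their vectors and looking up the nearest known concept. This is my own illustration, not how GPT-3 actually works internally; the embeddings are hand-made, made-up values:

```python
import numpy as np

# Toy hand-made embeddings over three made-up feature dimensions
# (shelter, mobility, flatness) -- values are purely illustrative.
vectors = {
    "home":    np.array([1.0, 0.0, 0.0]),
    "wheels":  np.array([0.0, 1.0, 0.0]),
    "trailer": np.array([0.9, 0.9, 0.0]),
    "table":   np.array([0.0, 0.0, 1.0]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def combine(word_a, word_b):
    """Add two concept vectors, return the nearest stored concept."""
    query = vectors[word_a] + vectors[word_b]
    candidates = (w for w in vectors if w not in (word_a, word_b))
    return max(candidates, key=lambda w: cosine(vectors[w], query))

print(combine("home", "wheels"))  # -> "trailer"
```

Real systems learn these vectors from data instead of hard-coding them, but the merge-two-patterns-to-get-a-new-one idea is the same.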
Some of the things GPT-3 seems to lack: multiple senses, a bigger neural network with more training and data, diffusion I guess, video prediction, dialog (like Facebook's Blender), RL, and a body with a dozen of the known reflexes (the ones listed on Wikipedia) installed.
Putting it all together: you need to store and compute data, and you want to merge patterns to make new patterns, both in the world and in the brain, so that you can persist as a pattern (i.e. not die). LaMDA currently has some intelligence, and I think if there existed a powerful video-prediction AI where you feed it a clip of yourself and it responds in kind, back and forth, so you could effectively "Zoom/Skype call" with it, it would feel pretty human, because it would look human and show expression with face, body, and voice, better than a text-only model. But it would still lack things.
Just training it on a lot of data captures our goals: if the word "food" (or something semantically related) pops up a lot, the model will talk about that domain a lot. You may need to do what Blender did with dialog to make it better follow desires about what to talk about, though. Also, humans each have their own unique job, so one cannot just use GPT-3 as AGI; one has to clone said AGI a million times and tell/install each with its own job to predict and talk about. Each needs to research and talk about its job. Having a single goal to bite on at a time helps it focus, too.
You don't have to actually tell each AGI its job one by one, no. All you do is take the AI domain words, like computers, AI, coding, programming, math, backprop, etc., and assign the most common words to more of the clones, because those words are more common for a reason. So most AGIs would do those jobs/words, some AGIs would do the rarer words/jobs, and so on down to small probabilities (see the sketch below).
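Here's a minimal sketch of what I mean, assuming word frequency is a usable proxy for how many clones should work on a topic. The word list and counts are made up for illustration; in practice you would count them from the training data itself:

```python
import random
from collections import Counter

# Hypothetical corpus counts for AI-domain words (illustrative only).
word_counts = Counter({
    "coding": 500, "AI": 400, "math": 300, "backprop": 50, "provers": 10,
})

words = list(word_counts)
weights = [word_counts[w] for w in words]

def assign_jobs(num_clones):
    """Give each clone a job topic, with probability proportional to
    how often that topic's word appears in the data."""
    return random.choices(words, weights=weights, k=num_clones)

jobs = assign_jobs(1_000_000)
print(Counter(jobs).most_common())  # common words get the most clones
```

Common topics end up with the most workers, rare topics still get a few, and nobody has to hand-assign anything.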
I would say we are close to AGI (maybe by 2035; Ray Kurzweil says 2029, if I've read him right), given that some things are improved. I think it looks less alive right now mostly due to scale/breadth, handicaps, not having a face and body, and lacking a dialog system and a way to get it to research properly. Keep in mind I think most of AGI is the big part of the cake, the next part is the icing, and the last bit is the cherry, as is often said in the AI field. I think the big part is closing in on done now.