Hi guys, second post here on the forum.
In my understanding, AGI is a robot, powered by artificial intelligence, that can do any real-world job an average human can. What OpenAI currently has is more like a brain in a jar.
For example, I find the new Optimus robot serving drinks to be much closer to what AGI stood for ten years ago, while for some reason OpenAI's interpretation of AGI seems to stop at the brain in a jar.
I get that there is Figure 01… but that's another company, not OpenAI itself.
And it seems that bringing safe AGI to the world doesn't just mean building a brain in a jar or one capable robot, but mass-producing capable robots so anyone can buy them.
So, back to my question: Why is OpenAI not working on AGI?
OpenAI has described a five-stage roadmap toward AGI: chatbots, reasoners, agents, innovators, and organizations. The second stage, reasoners, was arguably reached with the release of o1.
The third stage, agents, is being worked on now. We should begin to see a lot of tooling capabilities and deep integrations soon.
So you could drop one of these agents into your company and it would be able to perform a range of tasks: suggest marketing material, respond to customer requests, run training sessions, and read, transform, and organize data (invoices, receipts, statistics, etc.).
Basically, it would handle a lot of the lower-level work so that humans move into a supervisory role.
Well, I think that is the wrong way to look at what AGI means. That is my point.
“When discussing Artificial General Intelligence (AGI) over the past decade, many experts have defined AGI as the ability for a machine to not only replicate cognitive tasks but also to exhibit motor functionality and other general capabilities akin to a human being. AGI differs from narrow AI, which is specialized for specific tasks like image recognition or language translation.”
My take on this is that if they are just going to focus on the cognitive capabilities, it is not AGI! The company started out with a mission to build AGI, which is why I'm asking why OpenAI is not working on AGI.
A robot with a full body that is dumb as bricks is nothing more than a novelty.
Technically, many industrial plants already have robots like this, and they achieve their very limited scope of tasks with perfection.
You can "teach"/program a robot to play tic-tac-toe, but it can't then look at a Scrabble board and play that game.
So OpenAI is more focused on building a "brain" that is smart enough to generalize to whatever task is thrown at it, and then giving it a body would presumably come next.
I just think they are taking too long to get started on the robotics part of it all. It feels like they are no longer focused on the full problem that is AGI.
It would be great to hear from OpenAI whether they have plans to get started on robotics, or at least to get an update on the timeline.