OpenAI’s AGI strategy
By Ben Dickson

I’ve been frequently sounding the alarm about the path OpenAI has taken since it started its partnership with Microsoft. I’ve argued that the artificial intelligence lab has gradually strayed from pursuing science toward creating profitable products for its main financial backer.

OpenAI CEO Sam Altman put some of my doubts to rest this week with a blog post in which he laid out the lab’s plan for artificial general intelligence (AGI). Regardless of where you stand on the AGI debate, the post includes some interesting points about how OpenAI plans to tackle the challenges of AI research and product development. And I think this is important because many other research labs will be facing similar challenges in the coming years.

Altman also leaves some questions unanswered, which might be fair given the constant changes that the field is going through. Here are some of my key takeaways from OpenAI’s AGI strategy.

Source: Tech Talks

Seeing how Sydney was initially managed really surprised me. Was it a failure of communication? Ignorance? The things the bot was spitting out were very alarming (some of them R-rated). Their solution of essentially limiting the conversation length was also strange. It wasn’t as if some serious prompt injection was required; some people were falling into this pit by accident. It seemed like a complete step back from ChatGPT.

After seeing the AGI post I felt a bit relieved as well. However, these same issues are clearly going to occur again. Value alignment is critical, and based on what happened with Sydney, it seems that even massive companies with entire departments invested in this technology aren’t being proactive.

I am hopeful, but not optimistic.


Rush to get it out? I dunno. Even I’ve had better success against prompt injection so far…

Maybe it had to do with the super large context window? More room to hang themselves?

Right? These things were rudimentary, and already solved!

A simple moderation API in the pipeline would have prevented some of these issues (a rough sketch of what I mean is below). I’m sure it’s all very complex and beyond my scope. However it’s just all… so strange…
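For illustration, here’s a minimal sketch of what a moderation gate in the pipeline might look like, using the moderation endpoint in OpenAI’s Python SDK. The helper names and the `generate_reply` callback are my own inventions, just stand-ins for whatever actually calls the chat model:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def passes_moderation(text: str) -> bool:
    """Return False if the moderation endpoint flags the text."""
    response = client.moderations.create(input=text)
    return not response.results[0].flagged

def safe_reply(user_message: str, generate_reply) -> str:
    # Screen the user's message before it ever reaches the model...
    if not passes_moderation(user_message):
        return "Sorry, I can't help with that."
    draft = generate_reply(user_message)
    # ...and screen the model's draft before showing it to the user.
    return draft if passes_moderation(draft) else "Sorry, I can't help with that."
```

Checking both directions, input and output, would have caught a lot of the stuff people were reporting.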


The biggest hurdle to AGI is model architecture and compute resources. Right now LLMs are stochastic parrots, or in plain English, auto-completion engines incapable of thought. This is the nature of the Transformer architecture these models run on (even with RLHF).
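“Auto-completion engine” is meant literally: at each step the model scores every possible next token, one gets appended, and the loop repeats. A toy sketch of that loop using the small GPT-2 model from Hugging Face’s transformers library; this is just an illustration of the mechanism, not how Bing or ChatGPT is actually served:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

ids = tokenizer.encode("The cat sat on the", return_tensors="pt")
with torch.no_grad():
    for _ in range(10):
        logits = model(ids).logits        # a score for every vocabulary token
        next_id = logits[0, -1].argmax()  # greedily take the most likely one
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))  # the "completion", built one token at a time
```

There’s no plan or world model anywhere in that loop. RLHF reshapes which continuations score highly, but the mechanism is still next-token prediction.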

To get to AGI, deeper structures like knowledge graphs need to be employed, but these are hard to auto-generate, painstaking to build by hand, and add computational complexity (there’s a toy sketch below). The Transformer can be parallelized and is very powerful, but it is also limiting. Andrej Karpathy has a great explanation of the Transformer (he was a founding member of OpenAI and led the AI team at Tesla). It’s a great RNN, but more has to surround it to get to AGI, IMO.
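For what it’s worth, the simplest form of a knowledge graph is just a set of (subject, relation, object) triples you can query and chain. The facts and the `query` helper below are made up purely for illustration:

```python
# A toy knowledge graph: explicit (subject, relation, object) triples.
triples = {
    ("ChatGPT", "is_a", "large language model"),
    ("large language model", "based_on", "Transformer"),
    ("Transformer", "introduced_in", "Attention Is All You Need"),
}

def query(subject=None, relation=None, obj=None):
    """Return every triple matching the (possibly partial) pattern."""
    return [
        (s, r, o) for (s, r, o) in triples
        if (subject is None or s == subject)
        and (relation is None or r == relation)
        and (obj is None or o == obj)
    ]

# Two-hop chain: ChatGPT -> large language model -> Transformer
step1 = query(subject="ChatGPT", relation="is_a")
step2 = query(subject=step1[0][2], relation="based_on")
print(step2)  # [('large language model', 'based_on', 'Transformer')]
```

Hand-curating millions of triples like these is exactly the painstaking part, and walking the graph at inference time is the added computational cost.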
