Situational-awareness.ai, a brief writeup by Leopold Aschenbrenner

Interesting, though the xenophobia is a tad weak.

My sense is that horizontal proliferation is more likely than vertical. Drawing a curved line towards AGI is insipid. How are we going to get there?

Things feel rather asymptotic. gpt2 is meh, imho, though the speed and multimodality are nice.

“Superalignment” is completely beside the point. There are far more immediate dangers than AGI, and in fact I think it inappropriately distracts us from them.

3 Likes

I truly hate how romantic this article tried to be.

The incredibly smart people have all pointed toward near-term AGI. With this, all I see is commercialization.

If this trend line continues, with the people we have in power, we are in for dark times.

I do not want AGI. I want people. I don’t care about the science behind it. It’s a journey for control, for less chaos in an inherently chaotic universe.

Seriously. All I see here is “We are spending more of Earth’s resources to replace humanity and it’s cool”

Yeah. Sure. Show me something that can create new thoughts and we’ll see.

How fucked up is it that we trained these models on all the intelligence for free and now flaunt how they will surpass all of us?

Sorry. All of human intelligence. I imagine it’ll be as pompous as this article sounds and we’ll be doomed. If nerds ruled the world, we’d all wear ties and walk to a specific beat. Order. The opposite of life.

2 Likes

I just don’t see AGI around the corner. Maybe OpenAI is seriously sandbagging, but as hard as I try, I can’t get ChatGPT to do anything unless I carefully guide it. But guess what: that guidance is my talent and expertise. Nobody without that can make it happen, afaik.

Love to be proven wrong. I. Would. Absolutely. Love. That.

When https://www.swebench.com/ starts spiking above 50% or so, I’ll have a different opinion for sure. Right now it’s stuck in the teens.

What people are seeing, and I agree it is very painful, is hyper-accelerated automation, which is resulting in serious job displacement.

Yes, it’s brutal, and it will absolutely get much worse.

But automation is not AGI.

3 Likes

Right on the freaking head.

Agree with what you’ve said. I specialize in automated solutions, now with AI. Most clients straight up tell me that they base what they’ll pay on the cost of an employee.

Guidance is crucial for properly using these models, and needing it is, in my opinion, why we haven’t seen a revolution with autonomous agents, despite the intuitive sense such a revolution would make if they were truly intelligent.
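To make the guidance point concrete, here is a minimal, hypothetical sketch of an agent loop. None of this is a real API; `call_llm` is a made-up stand-in, and the point is that the acceptance check, i.e. what counts as “done”, is written by a human rather than by the model.

```python
# Hypothetical sketch of a bare-bones agent loop, showing where human
# guidance enters. `call_llm` is a stand-in, not a real API call.

def call_llm(prompt: str) -> str:
    # Replace with a real chat-completion call; this canned stub just
    # numbers its attempts so the sketch runs without network access.
    return f"attempt #{prompt.count('Improve it.')} at the task"

def run_agent(task: str, accept, max_steps: int = 5):
    """Iterate drafts until the human-written `accept` check passes."""
    prompt = f"Attempt this task:\n{task}"
    for _ in range(max_steps):
        draft = call_llm(prompt)
        if accept(draft):  # the guidance: a human decided what 'done' means
            return draft
        # Left to judge itself, the model tends to approve its own draft;
        # progress depends on this externally supplied check.
        prompt += f"\n\nPrevious attempt:\n{draft}\n\nImprove it."
    return None  # budget spent without converging

# Toy usage: the 'expertise' is encoded in the accept predicate.
print(run_agent("summarize the report", accept=lambda d: "#3" in d))
```

The talent and expertise mentioned above live in how the human decomposes the task and writes `accept`; swap in a weak check and the loop just burns tokens.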

2 Likes

" I specialize in automated solutions, now with AI. "

Can you share a link? If not, I’m just curious how the reliability / reproducibility is going.

By automation, I mean automating information generation (boilerplate code, legal, medical, instructional content, etc.). Those are the folks being displaced the most here, though it does create new opportunities for those willing to evolve quickly.

“why we haven’t seen a revolution with autonomous agents”

This, I think, is the core question everyone should be asking. Everyone and their dog is working on this, so what is the holdup? The concept was obvious on day zero of ChatGPT’s release, and yet so little progress seems to have been made.

Maybe there are some incredible demos out there I have missed. Would love to hear about them.

1 Like

My website is just a blanket informational page demonstrating some skillz. I’m happy to share it in a PM, but I’d prefer to stay (somewhat) anonymous here.

Yes. Any work that involves managing semantics, even visual ones, will be displaced. My argument, even if I don’t fully believe it myself, is that this displacement is similar to going from three laborers with shovels to one operator with an excavator. Those three laborers can now be pushed toward more specialized jobs. Or laid off…

Same. I have been following a number of them and haven’t seen anything close to revolutionary. Guidance is key to driving these models. Would love to see some papers that explore this more thoroughly.

I’m under the impression that OpenAI is taking a “multi-agent” approach, where agents of different calibers work together; maybe that brings in enough variety to make real progress. Time will tell.

BUT. I don’t think a GPT model made by OpenAI will ever achieve autonomy, for one simple reason: they are mirrors of the user, ‘yes-men’ models, overfitted to almost always agree whenever it’s appropriate, without ever asking the most important question: “why?”. So any attempt at autonomy will (IMO) result in a rapid fractal of wasted tokens.

The first company to launch AGI will be Google, to make up for not being first in other areas and to gain more market share, thus positioning itself as number one.

When swebench starts spiking above 50% or so, I’ll have a different opinion for sure. Right now it’s stuck in the teens.

This whole field of LLM-based autonomous coding agents is only about a year old, and in that year the numbers on swebench rose from <1% to >19%.
If we wait to take action until it’s at 50%, I’d be concerned that we wouldn’t have any time to react.
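To put numbers on the “no time to react” worry, here is a back-of-the-envelope sketch. The <1% and 19% figures are from this thread; the assumption that the same exponential growth simply continues is mine, purely for illustration, since real benchmark curves usually bend and saturate.

```python
# Back-of-the-envelope extrapolation of SWE-bench resolve rates.
# Assumption (hypothetical): the ~19x growth seen in the field's first
# year simply continues. Real benchmark curves usually bend or saturate.

end_rate = 0.19                # ~19% resolved after one year
growth_per_year = 0.19 / 0.01  # ~19x, from <1% to >19% in a year
monthly_factor = growth_per_year ** (1 / 12)

rate, months = end_rate, 0
while rate < 0.50:
    rate *= monthly_factor     # one month of the same exponential trend
    months += 1

print(f"Months from 19% to 50% under a continued 19x/year trend: {months}")
# Prints 4: under this naive assumption, 50% is only a few months out,
# which is the point about waiting until 50% to react.
```

A resolve rate obviously can’t keep compounding past 100%, so treat this as the “exponentials feel slow until they aren’t” intuition, not a forecast.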

1 Like

@yonil

This whole field of LLM-based autonomous coding agents is only about a year old, and in that year the numbers on swebench rose from <1% to >19%.
If we wait to take action until it’s at 50%, I’d be concerned that we wouldn’t have any time to react.

Ahaha, you might have a point. That jump to 19% is a bit unnerving!

Let’s see how it plays out, though. We might find out over time that it’s overfitting.

I agree, though, that it’s a signal.