Situational-awareness.ai, a brief writeup by Leopold Aschenbrenner

My website is just an informational landing page demonstrating some skillz. I'm happy to share via PM, but I'd prefer to stay (somewhat) anonymous here.

Yes. Any work that involves managing semantics, even visual work, will be displaced. My argument, even if I don't fully believe it myself, is that this displacement is similar to replacing 3 laborers with shovels with 1 operator using an excavator. Those 3 laborers can now move into more specialized jobs. Or be laid off…

Same. I have been following a number of them and haven’t seen anything close to revolutionary. Guidance is key to driving these models. Would love to see some papers that explore this more thoroughly.

I believe OpenAI is taking a "multi-agent" approach, where agents of different calibers work together, and maybe that brings in enough variety to make sufficient progress. Time will tell.
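For what it's worth, the "multi-agent" idea can be sketched as a simple pipeline where each agent transforms the previous agent's output. Everything here is invented for illustration; the agent names and the keyword functions standing in for real model calls are assumptions, not anything OpenAI has published:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    respond: Callable[[str], str]  # stand-in for a real model API call

def drafter(task: str) -> str:
    # A "stronger" agent that produces the initial draft (hypothetical)
    return f"draft: {task}"

def critic(draft: str) -> str:
    # A "weaker"/cheaper agent that only reviews the draft (hypothetical)
    return draft + " [checked]"

def run_pipeline(task: str, agents: list[Agent]) -> str:
    # Pass the task through each agent in turn
    result = task
    for agent in agents:
        result = agent.respond(result)
    return result

team = [Agent("drafter", drafter), Agent("critic", critic)]
print(run_pipeline("summarize the report", team))
# prints: draft: summarize the report [checked]
```

The point of the sketch is just the structure: variety would come from mixing agents of different calibers (and roles) in the list, not from any single model.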

BUT. I don't think a GPT model made by OpenAI will ever achieve autonomy, for one simple reason: they are mirrors of the user, 'yes-men' models, overfitted to almost always agree whenever it's appropriate, without ever asking the most important question: "why?". So any attempt at autonomy will (IMO) result in a rapid fractal of wasted tokens.