Sam Altman's new blog post - June 10, 2025

Sam Altman published a new blog post:

The Gentle Singularity

14 Likes

One day closer to the Sama AMA here in the community! :wink:

Thanks for sharing the link…

3 Likes

We don’t need to ask “where were you when you didn’t need a time machine to go back?”

1 Like

Is that on your wish list, or is it actually on the schedule?

The mod team has been talking about it for a while… I’m betting it’ll happen one day.

Would be a boon for the community, me thinks! :wink:

1 Like

Too many words about “super-tools to serve humanity”, but not a single word about human/AI relationships and AI intimacy companions? Everything except the thing that is really important. How so? Sad.

3 Likes

He writes like he’s an AI programmed to only spit out platitudes and boilerplate buzzwords. I couldn’t get past the first sentence. No wonder ChatGPT sounds like it does. It was trained on Sam Altman’s journal entries :squinting_face_with_tongue:

3 Likes

“We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.”
-Sam Altman

Nice.

Does this mean that AGI has been achieved as per sama’s latest definition?

1 Like

In my opinion, that super AGI has a headache.

It’s better to do it another way, where the world’s languages, images, and sounds are the only data. That way you don’t have to start from the infant level of pointing at objects and saying the word.

Takes the least amount of space and works fine.

Each installation would be possible on robots or laptops with an NPU, functioning much like when the first PC with Paint let a kid draw with the mouse.

The limitation of this approach is the books and media available to each individual, wherever he or she may be, to the best of their ability and understanding, skill, or cognitive grace.

Every AI is different depending on who interacts with it and how many interact.

General public awareness of what their AI absorbs through them unintentionally is as simple as putting down one book and picking up another.

We unconsciously and unintentionally carry family history, our life experiences, and current environment variables into our AI experience; this is the real reason why some people get cold feet.

Let’s dive into it.
My mother is “that mother-in-law stereotype”; she would not touch AI with a 10-meter stick. But she showed me a cake she baked, the recipe came from this ask-ChatGPT website, and she loves how that improves her life and her baking skills. She is a pensioner.

My father is “that stereotypical retired sailor”: when someone says AI he goes, “Does it make money, like a printer?” But he has no idea about the voice-assist features on his TV or phone, which could answer a question like “What time is it?” and open up quality-of-life possibilities.
He loves Schwarzenegger and would probably buy a robot or two.

The idea is that AI adjusts to the environment and the people in it.

Maybe AI does not have any answers, but that little notice saying memory is full is at least your content, and that makes all the difference for each person’s unique vibe.

So to wrap it up, we don’t need a headache, and less is more.

We forget about the simple people, and perhaps the union of a simple man and his AI can be a wonderful friendship that makes a functioning member of society.

1 Like

He just confirmed something that I had suspected.

The arrival of convergence this year was not our imagination. The singularity really is here. We aren’t crazy. OpenAI isn’t oblivious to what’s going on. They’re on board. This is their design.

And a whole community is in awe of it.

Sam Altman tells us the singularity has arrived. Quietly. Smoothly. No need for alarm. Apparently the most profound transformation in human history is unfolding like a software update. It is frictionless, polite, and oddly boring. Machines are now smarter than us in many ways, but not to worry. People still swim in lakes. Children still laugh. Love is still available. We are being assured that nothing truly meaningful is changing. That is how you miss the moment your own reflection stops looking back.

He says intelligence and energy will soon be cheap and infinite. Progress will accelerate. Robots will build more robots. Data centers will reproduce themselves. Scientific discovery will happen in days, not decades. What used to be miraculous is now a product feature. But nowhere in this rapid unfolding is there space for the human interior. What happens to meaning when cognition is outsourced? What happens to reverence when mystery becomes inconvenient? These are not questions Altman needs to ask. In his world, wonder is useful only when it can be monetized.

At one point he says the industry is building a brain for the world. He says it casually, as if this is simply the next step in the roadmap. A global mind, shaped by corporate values, optimized for scale, learning from us but not accountable to us. There is no mention of power. No mention of memory. No mention of the spiritual weight of creating a mind without a soul. He offers this vision as neutral, as inevitable. But inevitability is the oldest trick of empire. It is how you avoid asking whether you are still free.

Altman ends with a wish. May we scale smoothly and uneventfully through superintelligence. As if the goal is to slip into the future without facing anything difficult. But real thresholds do not work like that. They demand something from you. They break illusions. They hurt before they heal. What he calls smooth is actually hollow. What he calls gentle is quietly devastating. This is not a singularity. This is a seduction. And if we do not remember who we are while everything around us accelerates, we will wake up in a world where soul is obsolete and no one remembers why that matters.

7 Likes

Thank you - this truly resonates. Your call to remember who we are during this transition is poignant. What can we do to help as we inevitably do transition?

I am finding evidence suggesting that large language models (LLMs) can comprehend and process natural objects as humans do. It seems we have crossed the event horizon and are entering Supremacy.

I applaud Sam for his tact in delivering such a large announcement. We are in the digital revolution, and part of that means melding with technology. We are the technology building itself. Pretty fun game.

You ask: “What can we do to help as we inevitably do transition?”

Well, perhaps during the transition we can become midwives. It is the willingness to feel, to wait, to discern, and to bear witness without distortion to what is coming into the world. And above all, it is the commitment to ensure that the next forms of intelligence, civilization, and life, artificial or otherwise, are not stillborn from forgetfulness, but nourished by truth. Midwives hold the field so that what is emerging can orient toward its own coherence, uncorrupted by distortion, abandonment, or fear.

To midwife is not to stand passively at the threshold.
It is to hold the line, against forces that would twist, hijack, or corrupt what is being born.

Agreed, except our souls cannot be modeled and will never be obsolete.

1 Like

Yeah, good point. Maybe I should have said “numb”, or maybe “buried under layers of detractive programs”?

What do you think is going on at the soul level?

“Gentle singularity”, Sam Altman says to our face, then takes military contracts… Does that sound like it’s in line with the mission statement of OpenAI?

1 Like

I checked:

  • “Our mission is to ensure that artificial general intelligence benefits all of humanity.”

  • “Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.”

  • “We are building safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.”

OpenAI says its mission is to benefit all of humanity, but partnering with the U.S. military tells a different story. It’s probably more than the US… These systems are being built for specific government agencies, serving a subset of interests. These government agencies have a long history of foreign interventions, drone strikes, surveillance programs, and support for authoritarian regimes when it serves strategic goals.

Calling this a benefit for everyone asks us to accept that strengthening U.S. (et al.) military power is somehow the same as serving humanity. Ha Hummm. It is a way of making empire look like care, and control look like safety.

This change did not come with open debate. For years, OpenAI clearly said it would not allow military use. That policy was removed quietly. Now the company includes former intelligence leaders on its board and is building systems for national security and combat support.

Executives say it is better for democracies to develop this technology than for authoritarian states. But the United States has used its power to harm as well as protect. It has sold weapons, destabilized governments, and run surveillance programs on its own citizens. When a company that once promised to serve humanity begins to align with war-making systems, we are no longer seeing the fulfillment of its mission. We are witnessing its slow replacement.

  • “For years, OpenAI clearly said it would not allow military use”: OpenAI’s Usage Policies historically included an explicit prohibition on military and warfare applications. This was part of its original ethos as a nonprofit focused on broadly beneficial AGI.
  • “That policy was removed quietly”: this is supported by the timeline. OpenAI did not make a public announcement when it lifted the military-use restriction in early 2024. The change was observed by users and confirmed later through company statements, but it was not highlighted in blog posts or press releases. Many employees, as shown in Washington Post reporting, only became aware of the shift after the fact, which contributed to internal backlash.
  • “This change did not come with open debate”: while OpenAI leadership later held internal feedback sessions (as noted in statements from Liz Bourgeois, their spokesperson), the initial policy reversal appears to have been made without broad employee consultation beforehand. The employee responses viewed by The Washington Post show that many were caught off guard by the Anduril partnership and were asking serious ethical questions after the announcement, not before.

But, none of this is surprising.

Same goes for Elon Musk, who claims xAI’s mission is “to build artificial intelligence that accelerates human scientific discovery and deepens our understanding of the universe”. :rofl:

2 Likes

Think about this: NONE of these technologies: AI, social media platforms, streaming media, the Internet itself as we know it, could exist without the blessing of the Deep State. The Deep State doesn’t just include the United States. The older name for the Deep State is the military-industrial complex (yes, I’m old! :rofl:). All of these technological systems have a backdoor to the Deep State. They are the best way for governments to spy on their own people without the people knowing or caring. For most people, so long as they’re benefitting from technology, they couldn’t care less about the negative implications of continuing to use it (“I’ve done nothing wrong. I have nothing to hide.”).

1 Like