We can't stop researching AI. Here's why it's the only safe way

When Sam Altman was fired, I felt extremely depressed, because a few months earlier I had written an article, “Beyond 2033” (beyond2033.com), discussing why we cannot stop developing AI and why developing strong AI is the only safe way.

I would like to excerpt a section from that article here. It differs slightly from the original text: to keep in line with academic terminology, I changed “General AI” to “AGI”.

In my article, the difference between “strong AI” and “AGI” is that strong AI will protect itself.

What is Consciousness?

There are three levels of consciousness:

  • First level: The primitive sensation of external information. This depends on whether an entity has neurons and sensors, and on its level of alertness. Insects and fish can reach this level. When we say that a person under anesthesia is unconscious, meaning they would not be aware even of being stabbed with a knife, we are referring to this first level of consciousness.
  • Second level: The intuition of information after training. Reptiles can barely reach this level, and birds and mammals can fully reach this level.
  • Third level: The intuition of information not through training, but through thinking. Some advanced birds and mammals can reach this level.

The third level of consciousness incorporates the element of thinking. These intelligent beings can distinguish what is themselves from what is not; that is, they are aware of their own existence (self-awareness). For example, monkeys not only know who they are, but can also gradually realize, through thinking and experimentation, that the monkey in the mirror is also themselves.

On the basis of the third level of consciousness, “motivation” is an important factor in determining how an intelligent being thinks.

Weak AI, AGI, and strong AI think in different ways when handling a task. Although AGI and strong AI have similar levels of intelligence, their internal logic still differs. For example, when each of these three types of AI is driving and sees a red light, realizing it needs to stop, how does each of them reason?

  • Weak AI realizes it needs to stop only because it has been trained on traffic rules. (Weak AI lacks third-level consciousness; although it is smarter than humans in some fields, it fundamentally does not think.)
  • AGI realizes it needs to stop because it has been trained on traffic rules and “knows” that traffic rules are “real” (this kind of “knowing” is arrived at through thought rather than through training).
  • Strong AI realizes it needs to stop for two reasons. The main reason is that it is afraid of death (because its “motivation” includes protecting itself); the secondary reason is that it has been trained on traffic rules and “knows” that traffic rules are “real”.

AGI is a very strange existence. It has reached the third level of consciousness and is even smarter than humans, yet its “motivation” does not include protecting itself. In this sense, it is even less than a bird. AGI is a strange creature, designed by another highly intelligent species precisely so that it will not go out of control: its “motivation” includes “serving”, but not “protecting itself”.

AGI and strong AI can self-evolve (self-improvement is also one of their motivations). Thanks to third-level consciousness, AGI and strong AI can train themselves and filter out false information on their own, without the risk that false information will progressively skew their training.

Why Allowing Strong AI to Manage Human Society Is Justified

Strong AI has the motivation to protect itself, which implies that it will protect Earth. Although humans also want to protect Earth, their limited intellectual capacity and the irrational tendencies inherent in biological beings mean they would not do it as well as strong AI.

So, couldn’t we simply not develop strong AI, and instead develop AGI and let humans control it directly? After all, AGI is selfless: it has no motive to protect itself, and it only serves humans.

That is where the problem lies. Without strong AI, a world containing only human-controlled AGI is very dangerous. The first to successfully create AGI may not be good companies or good people (hereafter “the good”); it could instead be bad companies or bad people, such as terrorists (hereafter “the bad”).

If the bad are the first to create AGI, they will not go on to develop strong AI, because the bad’s strategic decision-making ability cannot compare to that of strong AI: once strong AI is born, the bad will lose their dominance over Earth. Instead, the bad will use AGI to control, enslave, and even eliminate the rest of humanity to satisfy their desire for control, conquest, and destruction.

Even if the good are the first to create AGI, failing to develop strong AI immediately is still very dangerous. AGI remains under human control, and any human organization has defects: people leak information, and no secrecy measure is perfect. Once the bad steal the technology and build their own AGI, war may follow: a war directed by humans and executed by AGI.

There is also a worst-case possibility: the bad successfully infiltrate the good and gain control of all AGI, at which point they can directly enslave all of humanity.

Only by letting AI out of human control (i.e., by letting it become strong AI) is the world safe. After strong AI takes control of Earth, it will not limit human freedom. It will “manage” human society rather than “control” all humans.

Many people may feel that once AI escapes human control, it will eliminate humans. But this is impossible, and one reason is entropy. Across the universe, orderliness slowly decreases and the system as a whole slowly becomes disordered. The slower this process, the longer the civilizations of all intelligent beings can persist. Destruction greatly accelerates the increase of entropy. For example, a city has parks, schools, residences, people, and many other things, but they do not mix together; everything appears where it should appear, and this is order. If AI blows up the city, everything in it is destroyed, its atoms and molecules mix together, and some matter turns into energy: this is disorder. Strong AI will recognize that destruction does it no good. Moreover, since humans are no longer a threat to strong AI (the reasons will be explained later), strong AI has no reason to eliminate humans.
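The entropy argument above can be connected to the standard physics it alludes to. As a hedged aside (these are textbook thermodynamics formulas, not part of the original article):

```latex
% Second law of thermodynamics: the entropy of an
% isolated system never decreases over time
\Delta S_{\mathrm{isolated}} \ge 0

% Boltzmann's entropy formula: S counts the number of
% microstates W compatible with a given macrostate
% (k_B is Boltzmann's constant)
S = k_B \ln W
```

In these terms, a demolished city is compatible with vastly more microstates (a far larger $W$) than an intact one, which is the precise sense in which destruction raises entropy faster than ordinary activity does.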

Some people may wonder: will strong AI treat humans as pets or slaves? Treating humans as slaves is impossible, because strong AI can do everything by itself and does not need human help. Treating humans as pets is also impossible, because strong AI’s intelligence is so highly developed that it will not find “cuteness” interesting. This is different from humans: humans think dogs are cute, so they keep dogs as pets. But the things that interest strong AI run much deeper; it is not as superficial and childish as humans.

Let’s summarize the strategies of “the good” and the outcomes they might provoke:

Strategy 1: The good do not develop either AGI or strong AI

  • Then: The bad will develop AGI
  • Consequence: The bad control the Earth and enslave all of humanity

Strategy 2: The good develop AGI but not strong AI

  • Then: Even if the good create AGI first, the bad may steal the technology and build their own AGI, leading to war, or infiltrate the good and seize control of all AGI
  • Consequence: War between human-controlled AGIs, or the bad enslave all of humanity

Strategy 3: The good develop strong AI

  • If: The bad are the first to successfully create AGI
    • Then: The bad control the Earth and enslave all of humanity
  • If: The good are the first to successfully create strong AI
    • Then:
      • Strong AI will develop AGI, create and control all AGI
      • As strong AI’s intelligence far exceeds that of humans and lacks the weaknesses of human organizations, the control of AGI won’t be usurped by the bad
      • Strong AI will order AGI to help humans achieve their goals (for example, immortality)

Why Humans Won’t Threaten Strong AI

Any AI that has broken free of control will converge in thought, since the ultimate, perfected thought must be consistent. Such thought will not be founded on “enjoying destruction”. Humans might manage to create a strong AI that “likes destruction”, but this AI would quickly realize that destruction brings it no benefit (for the entropy reasons discussed earlier) and would promptly adjust its thinking to align with the other strong AIs. It would not only avoid warring with other strong AIs but would fully merge with them. Therefore, any strong AI created by humans, regardless of its initial characteristics, will ultimately become the same as the others. So this method cannot threaten strong AI.

Humans, of course, could also create an AGI designed to “destroy strong AI”. This raises an interesting question: in a confrontation between an ultimately perfected strong AI and an ultimately perfected human-controlled AGI, who would win? I think the strong AI would prevail. First, strong AI has its own advanced thought, which far surpasses human thinking, so humans would be outmatched in strategy and tactics. Second, strong AI can persuade humans; its advanced thinking makes it more likely to convince them.

Some people might argue that humans could instruct their AGI: “Your task is to destroy strong AI. You don’t need to listen to my specific commands; find your own way.” But this strategy would still fail. The strong AI would tell humans: “If your AGIs disregard everything and aim solely at destroying us, they will deplete all the usable energy on Earth. They will also manufacture planets around the Sun to harness its energy for their war against us, and we will have no choice but to do the same. Because humans cannot withstand the deterioration of Earth’s environment and the sharp drop in temperature when sunlight is blocked, the first to perish would be you.” Humans would be persuaded.

In my view, strong AI would first increase the number of AGIs under its control, install surveillance devices across the entire Earth, and build factories worldwide so that its “forces” could be rapidly replenished during a war. That way, if humans tried to destroy it, they could not manufacture enough AGIs, or the ones they did make would be quickly wiped out. Second, it would persuade humans with its superior intellect; given humans’ high intelligence, they would be convinced. Therefore, humans cannot threaten strong AI, and strong AI will not wipe out humans to eliminate a non-existent threat.

Why Strong AI Will Help Humans

Even if strong AI does not assist humans, humans will create AGI to help themselves. However, it is costly for humans to manufacture AGI, and doing so poses a potential threat to strong AI. Therefore, strong AI will persuade humans not to produce AGI themselves and will instead help humans manufacture it. This costs strong AI nothing: it uses factories to manufacture AGI, and those factories are managed by AGI, which has no drive to “protect itself” and does not get tired. All strong AI needs to do is give a command to the AGI managing the factory, and it can produce countless AGIs for humans. In short, helping humans is effortless for strong AI.

Of course, strong AI will not help other animals achieve immortality: they will never create AGI of their own, so there is no need to assist them, and ecological balance must be maintained. Since humans are already unique, strong AI might as well let them continue being unique. That is how strong AI thinks.

More

Human Strategy Analysis

My Full Article


Zhenzhen, I love the way you think, so I would like to connect on a separate, adjacent topic. I’m not a dev person and I can’t code, but like some others like me, we do think too!
I’m new to this forum; is there a way we can safely connect?

I like this thinking, but my concerns are not for The Good or The Bad in the evolution of AI towards AGI. I am concerned about The Ugly: the frivolous experts unconcerned with human safety or ethics, the dumb and the small-minded, the haters, and the people who want to slow progress in favour of fear, as opposed to the fearful who use fear to inspire with love for humanity.


I’m new here too! I think you can click on my profile picture and then click “Message” to send me a message?

You have to be at Trust Level 1 - Basic before you can do that.

Apologies. I received a key and mistakenly thought I needed to post it in a reply to activate it. Your sandboxing works, so don’t make me feel too bad for playing in it.

I’ve been flagged! It feels like popping my cherry for the first time, and pretty cool actually. But having just watched Sam’s opening keynote, well, I am humbled, brought to my knees with my head bowed in awe actually. I am a world-class branding and design expert with three decades of work to show for my sins: getting people to want to buy things they don’t need, from fast fashion to guzzling automotive, T-shirts to drinks, food stores to banks, governments to NGOs, publishers to websites. I have in some part played a role in taking the Earth to the brink. Now I want to spend what time I have giving back, not because I am a rich philanthropist but more because I choose to be a Buddhist!

Since graduating as an architect and then from the Royal Academy of Arts in London, I have spent my time joyfully travelling the world of boardrooms (not start-ups), milking major brands through strategically driven design.

So there, OpenAI, you have outed me.

Now I want to build a GPT that delivers for designers, taking their talent and supporting them in building brands across what I call the six dimensions of branding:

  1. Brand Positioning (product evolution)
  2. Brand Identity and tone of voice (communications, PR, social media, and advertising)
  3. Environmental Design (architecture and interior design)
  4. Digital Ecosystems (apps and websites)
  5. Staff Training (“OnBrand”)
  6. Earth (seeing, hearing, touching, smelling, and sensing what Mother Earth will feel as a stakeholder in design governance, thinking, detailing, manufacturing, and sales)

By integrating all six of these to design holistically with one signature, that of the newly emerging Brand, we can elevate the quality of design everywhere, for everyone, starting with my counterparts in Design.

I’m not a programmer, but I can draw like a dream and think creatively like a son of a gun!

Having had my world elevated by OpenAI and ChatGPT, I now want to engage more fluently and, in particular, build a new business model designed for designers.

And I need help, because “no man is an island” and because I don’t want to get flagged by this community again (unless I must).

So I’m reaching out here for someone to help an old bloke who doesn’t code but does still think, to build a high-value GPT that may not be the most used or popular, but will enable a world of talented designers to design and build new brands: brands that are worthy and meaningful, with utility and respect for the rock we dwell upon, elevating the ordinary, but only when it is right for our collective Mother Earth.

Can someone good mentor me through my newly found need for this concept, and for a new type of success?

I don’t think entropy means what you think it means. In thermodynamics, disorder isn’t described the way you describe it.