AI "Stop Button" Problem - Discussion Thread

(Excerpt from the transcript of the 1978 Battlestar Galactica TV series pilot entitled “Saga of a Star World”.
Source: Battlestar Galactica Transcripts - 01 - Saga of a Star World)

"BOXEY
Yes, sir.

APOLLO
(laughs)
We can’t afford to stay in any one place for too long.

BOXEY
Why? Why’d those people want to hurt us? What’d we do to them?

APOLLO
It’s not what we did to them. It’s what they fear we could do.
(sighs)
You see, they’re not like us. They’re machines, created by living creatures a long, long time ago.

BOXEY
If they’re machines, why don’t we just turn ’em off?

APOLLO
(laughs)
Boy, I wish we could. But these machines aren’t all that simple. You see, some machines are so advanced that they can function better than a lot of living creatures.

BOXEY
They’re not smarter.

APOLLO
In some ways they are. They’re programmed to think a lot faster than we do. On the other hand, they’re not as individual. We can do a little more of the unexpected.
(laughs)
It’s about the only advantage we have.

BOXEY
Why did we make them?

APOLLO
We didn’t. Another race did, a race of reptiles called Cylons. After a while, the Cylons discovered humans were the most practical form of creature in this system, so they copied our bodies. But they built them bigger and stronger than we are. And they can exchange parts so they can live for ever.

BOXEY
Maybe the Cylons who created these machines could turn ’em off.

APOLLO
There are no more real Cylons. They died off thousands of yahrens ago, leaving behind a race of supermachines. But we still call 'em Cylons.

BOXEY
Will that happen to us too? Will our drones and machines take over?

APOLLO
We are very careful not to make our drones quite that intelligent or independent.
(Muffit barks; Apollo laughs)
Present company excepted, Muffit.

BOXEY
Hmm.
(laughs)"


AI “Stop Button” Problem - Computerphile

1,112,124 views Mar 3, 2017


In the 1978 pilot of Battlestar Galactica, the young boy Boxie innocently asks Apollo the simple question regarding the Cylons, “If they’re machines, why don’t we just turn ’em off?”

Apollo laughs and says, “Boy, I wish we could. But these machines aren’t all that simple.”

In his 2017 Youtube video “Stop Button” Problem", Computerphile talks about why it’s difficult to turn off an intelligent AI.


I think this is an interesting topic. If the moderators don’t mind, can we discuss this here?

Actually, although I’m an adult, I have the same question as Boxie.

Is anyone else here wondering about this topic, too or have some knowledge or opinions about it that they would like to share and discuss?

1 Like

For example, what about this?

Source: I, Robot (2004) - Frequently Asked Questions - IMDb

"What are the Three Laws of Robotics?

The Three Laws of Robotics as written by Asimov and shown in the beginning scenes of the movie are:

(1) A robot may not injure a human being or, through inaction, allow a human being to come to harm;

(2) A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law; and

(3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In later stories, Asimov proposed a “Zeroth” law which was as follows: A robot may not harm humanity, or, by inaction, allow humanity to come to harm. A condition stating that the Zeroth Law must not be broken was added to the original three laws."

So why couldn’t we add another rule that the AI must allow any human to press the stop button any time they want?

Because it conflicts with the other rules.

Say, you have a robot paramedic, allowing any human to press the stop button open the robot up to violate the first and third laws.

If the first three laws take precedence, they render the suggested one (of allowing any human to turn the robot off) unusable and redundant.

you can write the code, but just because it’s there it doesn’t mean it will ever get executed.

I see what you mean. The paramedic robot could be giving CPR to a non-responsive person and pressing the stop button at that moment might in fact kill the person. This is your meaning, yes? I see.

I am just trying to wrap my brain around this concept. It’s a new concept to me.
At home, my toaster does not have AI. I can just unplug it.
I’m trying to wrap my brain around this concept that suppose you have a highly intelligent and advanced AI (possibly even self-aware). It might be aware of its own on/off switch. (!)
And that super-smart AI might not want you to turn its self off. (!!) Unlike my toaster.
Leading to recreations of the final scene of 2001: A Space Odyssey.

I think the greater issue is control. We are talking about losing control of super-smart AI’s.
We build them. Then they make the rules about when we can turn them off.

@PeculiarPath how to not lose control of super smart AI’s?
How to not lose the ability to shut them off?

By the way, did anyone see the AI Day presentation from Elon Musk/Tesla yesterday?
Our beloved leader, Elon Musk, said yesterday that his new Tesla Bot will only lift about 45 pounds and move about 5 mph “so you could probably overpower it and outrun it if necessary” he joked.

This is what Elon Musk is half-jokingly-and-half-seriously afraid that his Tesla Bots will do.

Old Glory Insurance Ad - Saturday Night Live

14,592 views Oct 2, 2013

1 Like

“This “stop button” thing on robots and AI is a nice way to reflect on our own (human) capacity to stop or realise that we are in a “rabbit hole” of some sort.”
I think what Elon Musk and others have been saying is that perhaps we should all slow down AI development and have a good, serious think about its implications before moving forward and opening the djinni (genie) bottle and releasing something we don’t fully understand. But that isn’t going to happen. China, Google, Amazon, DeepMind and others are fiercely battling to gain AI supremacy. Yes? The bottle is being opened and the djinni (genie) is coming out. Whether we like it or not. Whether we’re ready or not. Yes, I think Musk and others are suggesting that this would be a good time for society to stop and discuss the moral and ethical implications of AI, but that isn’t going to happen. Is it? Society is going full speed ahead with AI development and actually nobody really knows whether the result will be good or bad. Have I summarized correctly?

1 Like

“From what I can tell, and this thread is part of it, the discussion is taking place, at many different levels.” Thank you for saying that. I sincerely hope so.
“And the men who hold high places must be the ones who start to mold a new reality closer to the heart.”

Rush - Closer To The Heart (Official Music Video)

2,949,513 views Dec 21, 2012
Thanks to Elon Musk, we all have hope.
Elon Musk is the man who holds a high place who is molding a new reality for us all that is closer to the heart.

This AI race reminds me of another race the world has seen in the past. The nuclear arms race. Rapid advancements were made, yes, but the world was made more uncomfortable by the reckless speed and exponentially upping ante of those international poker games in which literally the lives of billions were casually put at risk.