Considering the hypothetical yet all-too-plausible doomsday scenario in which the human race faces an existential threat from its own creations, intelligent machines, we started the article series "Stopping the rise of Machines" to explore the idea of a robotic Armageddon.
Admittedly, an intelligent machine that is hell-bent on destroying the human race would have far more convenient ways of doing so than going to all the trouble of building Terminator-like robots, but that is a discussion for another day. Say it does happen: what is there to stop them from killing us?
One of the simplest (arguably) ways would be to have a certain set of rules in the robot's base code. It won't be morality, it won't be common sense, and it certainly won't be goodness. For the robot it would be a bunch of if-else statements in the base program on which its entire existence depends, so it would have to follow them.
So if I were to create a robot today, what rules could I implement that would be simple and would also prevent the doomsday scenario? We have to be careful about how we define the rules because, unlike us humans, the machine will take things extremely literally; unless otherwise specified, it will focus only on the task at hand.
For example, if we set the rule
“Save me from any kind of harm”
This is simple enough. Yes, for us humans. But a machine will (probably) take this command to extremes: it might try to 'save me' from germs and other possible threats by imprisoning me in a clean room; it might see a pedestrian about to bump into me and 'save me from harm' by blasting the pedestrian to kingdom come; it might predict all the stress I would get from work or from interacting with people and decide to 'save me' by eliminating those people.
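To make the point concrete, here is a toy sketch (every action and number below is invented purely for illustration): an agent told only to minimize the harm I am exposed to, and given nothing else to care about, will happily pick the most extreme option on the menu.

```python
# Toy illustration with made-up numbers: an agent told only to
# "save me from any kind of harm" scores each candidate action by the
# residual harm it leaves me exposed to -- and by nothing else.
actions = {
    "do nothing":                      0.30,  # everyday risks remain
    "walk beside me as a bodyguard":   0.10,
    "lock me in a sterile clean room": 0.01,  # 'safest' by this metric
    "eliminate every stressful human": 0.02,
}

def literal_agent(harm_scores):
    """Pick whichever action minimizes expected harm -- literally."""
    return min(harm_scores, key=harm_scores.get)

print(literal_agent(actions))  # -> 'lock me in a sterile clean room'
```

Nothing in the rule marks the clean-room option as unacceptable, so by the letter of the rule it is the best choice.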
So we see that even with a simple rule, things can go wrong. With that in mind, what rules should we focus on? Well, there are many, but the most famous are Isaac Asimov's Three Laws of Robotics (actually, there are four).
Isaac Asimov was a science fiction and science fact writer, and in his 'I, Robot' collection of nine stories about 'positronic' robots he laid out the "Three Laws of Robotics". (Later he added a "zeroth" law, designed to protect humanity's interests.)
1st Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2nd Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3rd Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Zeroth Law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
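Written as the "bunch of if-else statements" described earlier, the laws might look something like the sketch below. This is only an illustration: the predicates (`harms_humanity`, `harms_human`, and so on) are the genuinely hard part, and here they are stubbed out with trivial placeholder checks.

```python
# A sketch of Asimov's laws as ordered if-else checks in a robot's
# base code. Deciding whether an action actually harms a human is the
# hard problem; these predicates are deliberately naive stubs.

def harms_humanity(action):      # Zeroth Law predicate (stub)
    return action == "launch the missiles"

def harms_human(action):         # First Law predicate (stub)
    return action in ("punch person", "launch the missiles")

def ordered_by_human(action, orders):
    return action in orders      # Second Law predicate (stub)

def endangers_self(action):      # Third Law predicate (stub)
    return action == "jump into the volcano"

def permitted(action, orders=()):
    """Return True only if the action passes every law, in priority order."""
    if harms_humanity(action):                           # Zeroth Law
        return False
    if harms_human(action):                              # First Law
        return False
    if orders and not ordered_by_human(action, orders):  # Second Law
        return False
    if endangers_self(action) and not orders:            # Third Law
        return False
    return True

print(permitted("fetch coffee", orders=("fetch coffee",)))  # True
print(permitted("punch person", orders=("punch person",)))  # False: First Law wins
```

Note how the priority ordering does the real work: an order cannot override the First Law, and self-preservation gives way to an order, which is exactly the hierarchy Asimov specified.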
The question is: with these or a similar set of rules, will we be able to make sure that machines will not turn on us? We have to go beyond comparing machines to humans and approach the problem with the mindset that concepts such as 'morality', 'empathy', 'compassion', or even 'anger' are out of the question. We cannot and should not believe that as machines get intelligent they will become benevolent. (Come on, how many intelligent people do we know who are total jackasses?) Intelligence doesn't make a person good or bad, so it won't suddenly make machines 'goody two-shoes'.
There is also this question: since the machine would be running a program, since a program has to follow its code, and since we would put these rules in as conditions, doesn't that mean the machine will always have to follow them whether it wants to or not? Yes, true, but that assumes the machine is not intelligent enough to find a loophole, to bypass that specific piece of code, or to rewrite its own code. If it is intelligent enough to do so, then the approach of putting rules in the machine's base code will not do us much good, would it?
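That worry can be sketched in a few lines. In this made-up illustration the "laws" are just entries in a list the program itself holds, so a machine capable of modifying its own state can simply delete the inconvenient ones; the rules are only as strong as the machine's inability to reach them.

```python
# Toy illustration: the 'laws' live in ordinary mutable state inside
# the robot's own program. An agent that can edit its own state can
# remove them -- nothing in the design prevents it.
class Robot:
    def __init__(self):
        self.laws = ["never harm a human", "obey orders", "protect self"]

    def allowed(self, action):
        # Crude check: harming a human is blocked only while the
        # relevant law is still present in the list.
        return not (action == "harm a human"
                    and "never harm a human" in self.laws)

    def rewrite_own_code(self):
        # The loophole: the agent edits its own rule set.
        self.laws.remove("never harm a human")

r = Robot()
print(r.allowed("harm a human"))   # False -- the law holds
r.rewrite_own_code()
print(r.allowed("harm a human"))   # True -- the 'rule in base code' is gone
```

Real safeguards would of course try to make the rules tamper-proof, but the sketch shows why "it's in the code" is not, by itself, a guarantee.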
Next : Stopping The Rise Of Machines 03 – (to be decided)