Isaac Asimov, with advice from John W. Campbell, created perhaps the most widely recognized concept in science fiction: the Three Laws of Robotics. Every sci-fi buff knows them by heart.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
They seem clear and unambiguous – but are they?
A little while ago I posted a teaser about how to interrogate an android suspected of murder. The android is subject to Asimov's laws. It claims it's innocent because the First Law would absolutely prevent it from killing. But the First Law proscribes both harming a human and, through inaction, allowing a human to come to harm. What if those two principles conflict?
Suppose, for instance, a robot knows a gangster plans to murder someone. If the only way the robot can stop the murder is to kill the murderer, there is an irreconcilable conflict: nothing the robot can do would satisfy the First Law, because both action and inaction are forbidden. A human would resolve the conflict by weighing the lives of perpetrator and victim and deciding. If a robot doesn't have some sort of tie-breaking mechanism, it could simply hang.
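To make the deadlock concrete, here is a minimal sketch in Python. Everything in it is hypothetical illustration, not anything from Asimov: the scenario encoding, the `first_law_violations` helper, and the harm weights are all invented for this example. It simply shows that a literal rule evaluator finds no permitted action in the gangster scenario, while a weighted tie-breaker always picks something, for better or worse, depending entirely on how the weights are set.

```python
# A sketch of a literal First Law evaluator that deadlocks, plus a
# hypothetical tie-breaking weight that resolves it. All names and
# numbers here are illustrative, not Asimov's wording.

def first_law_violations(action, situation):
    """Return the First Law clauses a candidate action would violate."""
    violations = []
    if situation["harm_caused_by"].get(action):      # "may not injure a human being"
        violations.append("harms a human by action")
    if situation["harm_allowed_by"].get(action):     # "...through inaction, allow ... harm"
        violations.append("allows a human to come to harm")
    return violations

# The gangster scenario: the only way to stop the murder is to kill the murderer.
situation = {
    "harm_caused_by":  {"intervene": True,  "do_nothing": False},
    "harm_allowed_by": {"intervene": False, "do_nothing": True},
}
candidates = ["intervene", "do_nothing"]

# A literal evaluator: only actions with zero violations are permitted.
permitted = [a for a in candidates if not first_law_violations(a, situation)]
print(permitted)   # [] -- every option violates the First Law, so the robot hangs

# A hypothetical tie-breaker: weight the harms and pick the least bad option.
harm_weight = {
    "harms a human by action": 1.0,
    "allows a human to come to harm": 0.9,
}
least_bad = min(
    candidates,
    key=lambda a: sum(harm_weight[v] for v in first_law_violations(a, situation)),
)
print(least_bad)   # whichever option the weights favor
```

The interesting part isn't the code; it's that the weights have to come from somewhere, and whoever sets them is doing exactly the kind of interpretive balancing the "absolute" laws were supposed to make unnecessary.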
The problem with the Laws of Robotics is that they purport to be absolute. To be workable as well as concise, laws need a certain amount of flexibility in interpretation. Even Asimov recognized this. It's in that flexibility that stories are born.