
The Three Laws of Robotics
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
First things first: the Hollywood movie I, Robot (also reviewed) with Will Smith has only a few things in common with this book of short stories. Keep in mind that the name was licensed to the movie studio after the script was already written. Scenes were adjusted to include the Three Laws, Susan Calvin, and Alfred Lanning, and that is about where the similarities between the book and the movie end. There might be a few concepts lifted from some of the stories, but by no means is the film “based” on the book. To give the filmmakers credit, the opening credits say only “inspired by.”
I, Robot is a collection of short stories by Isaac Asimov. Keep in mind that these stories were mainly written in the 1940s and then published together in 1950. They describe the basics of the Three Laws of Robotics and what can go wrong with them. Asimov uses the Three Laws as a literary device to create puzzling situations. Several of the stories involve Susan Calvin, the top robo-psychologist at the only robot manufacturing company, US Robots and Mechanical Men, Inc. For anyone interested in reading the Robot novel series, this book acts as a nice introduction to the basic concepts. As a matter of fact, anyone with any interest in sci-fi should read this book. I consider it required reading.
The book left me thinking more about philosophy than the actual story. I suppose that is a good thing. In Asimov’s foreword, he brings up the biblical parable of the Good Samaritan, in which Jesus answers the question, “Who is my neighbor?” The moral of the story is that love and mercy should extend to all people (humans). History in our century shows that we as a society do not treat each other as neighbors, and in this fictional future, that does not change much. The central theme of this book is three robots attempting to answer the question, “What is human?” so that they will know whom they should serve and protect. Further, the assumption programmed into them, that only biological humans are important enough to protect, is called into question. Wolruf, a wolf-like alien, and the sentient bird-like aliens are included in the programming as ‘human’ so that the robots will protect them. Under the definition of humans as the highest form of being, the robots had a hard time believing that Derec, Ariel, and Avery were human. I really enjoyed the development of the robots’ search for the truth.
The line I will remember most is a philosopher robot’s answer to the question, “What is a human?” It answers, “That depends on your point of view.” Our society can definitely relate to this, since some groups categorize other groups as sub-human based on race, gender, accent, and, dare I say, immigration status.
I am beginning to question why, in this story, robots are only directed to protect humans and not all life…
The Three Laws of Robotics were developed initially to safeguard humans against robots. If I remember correctly, programmers also wanted to ensure that robots remained loyal to humans in case they encountered aliens. In Asimov’s Robot and Foundation universe, humans mainly worried about protecting themselves, not about respecting all forms of life. They assumed that humans were at the top of the food chain and should stay that way.
The Robot City/Aliens books were the first time Asimov allowed other authors to write using the Three Laws of Robotics. He challenged them to approach his work from a fresh angle. It’s been a while since I’ve read these, but from what I remember, these new authors did a pretty good job.