When the concept of robotics was first dreamed of, Isaac Asimov created a thought experiment in the form of his story collection ''I, Robot''. In it he imagined the creation of autonomous intelligence in the form of androids. At this point a conundrum similar to the one above arose: if a machine is developed which has autonomy, how can we ensure that it does no harm to humans? To address this he devised the Three Laws of Robotics, distinct ethical rules to protect humans from the ruthlessness of machines:
* '''First Law''' - A robot may not injure a human being or, through inaction, allow a human being to come to harm.
* '''Second Law''' - A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
* '''Third Law''' - A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.