There has been much alarm lately about the possible dangers to humanity posed by AI that grows ever smarter and more capable, and that might at some point even decide humans are a problem for the planet. But some seminal works of science fiction mulled such scenarios long before 8-bit home computers entered our lives.
The original submission cites Isaac Asimov's Three Laws of Robotics from the 1950 collection I, Robot.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The original submission asks, "If you programmed an AI not to be able to break an updated and extended version of Asimov's Laws, would you not have reasonable confidence that the AI won't go crazy and start harming humans? Or are Asimov and other writers who mulled these questions 'So 20th Century' that AI builders won't even consider learning from their work?"
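For what it's worth, the precedence ordering of the Laws is the easy part to write down. Here is a minimal Python sketch, not anything from Asimov or from any real AI system: every name in it is hypothetical, and it assumes the genuinely hard judgments (whether an act injures a human, or prevents harm to one) arrive as pre-labeled booleans. It shows the strict First > Second > Third priority, and how the "through inaction" clause can compel a robot to act against its own interest:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical toy model: each candidate action carries flags that a real
# system would somehow have to infer; that inference is the unsolved part.
@dataclass
class Action:
    description: str
    injures_human: bool        # First Law, active clause
    prevents_human_harm: bool  # First Law, inaction clause
    ordered_by_human: bool     # Second Law
    endangers_robot: bool      # Third Law

def choose(actions: list[Action]) -> Optional[Action]:
    """Pick an action under a strict First > Second > Third Law ordering."""
    # First Law, active clause: discard anything that injures a human.
    candidates = [a for a in actions if not a.injures_human]
    # First Law, inaction clause: preventing harm to a human overrides
    # obedience and self-preservation alike.
    protective = [a for a in candidates if a.prevents_human_harm]
    if protective:
        return protective[0]
    # Second Law: obey a human order, even one that endangers the robot.
    ordered = [a for a in candidates if a.ordered_by_human]
    if ordered:
        return ordered[0]
    # Third Law: otherwise prefer self-preservation.
    safe = [a for a in candidates if not a.endangers_robot]
    return safe[0] if safe else (candidates[0] if candidates else None)

# Example: the inaction clause forces the robot into the fire.
wait = Action("stand by", False, False, False, False)
rescue = Action("pull a human from a burning building", False, True, False, True)
print(choose([wait, rescue]).description)  # -> pull a human from a burning building
```

The sketch makes the usual objection concrete: the control flow is trivial, but computing those booleans, deciding what counts as "harm" in an open-ended world, is the entire problem the Laws leave unspecified.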
Wolfrider (Slashdot reader #856) is an Asimov fan and writes that "Eventually I came across an article with the critical observation that the '3 Laws' were used by Asimov to drive plot points and were not to be seriously considered as 'basics' for robot behavior. Additionally, Giskard comes up with a '4th Law' on his own and (as he is dying) passes it on to R. Daneel Olivaw."
And Slashdot reader Rick Schumann argues that Asimov's Three Laws of Robotics "would only ever apply to a synthetic mind that can actually think; nothing currently being produced is capable of any such thing, therefore it does not apply..."
But what are your own thoughts? Do you think Asimov's Three Laws of Robotics could ensure safe AI?