Yes, we know there are really four laws, not three. Yes, we know Asimov designed the laws to produce logical conflicts and interesting stories. Still, any time you get a lot of scientists together to discuss robots and ethics, Asimov's Laws of Robotics come up. And, as pointed out in Wired's Gadget Lab today, the latest paper on robot ethics is no exception. The paper is Toward the Human-Robot Co-Existence Society: On Safety Intelligence for Next Generation Robots (PDF format). The authors, Yueh-Hsuan Weng, Chien-Hsun Chen, and Chuen-Tsai Sun, do understand what Asimov intended and do not take his laws as a serious basis for robot ethics. But they make some interesting observations about them anyway, such as pointing out that the laws are fundamentally unethical:
by Warren’s definition the robots with Human-Based Intelligence could be seen with moral standings, and it’s immoral to force robots with Human-Based Intelligence to obey Asimov’s [second law] to “serve people” such as Ian Kerr said earlier. As [for] the robots without moral standings, Anderson introduced Immanuel Kant’s consideration that “humans should not mistreat the entity in question, even though it lacked rights itself”, he argued that even though animals lack moral standing and can be used to serve the end of human beings, we should still not mistreat them [also allowed under the second law].
Aside from its take on Asimov's laws, the paper goes on to address the more serious real-world questions presented both by the hypothetical conscious, autonomous robots of the future and by today's simpler automated robots. The authors discuss whether any robot, today or in the future, could be given a formalized system of moral values. They also address the question of how to create a legal structure to handle ethical issues arising from human-robot interaction.