A San Francisco Chronicle article offers insights into discussions among
roboticists and computer experts at Stanford's Technology in Wartime
conference. The researchers gathered to "consider the ethical
implications of wartime technologies and how these technologies are
likely to affect civilization in years to come". Among the topics
discussed was whether scientists can create war robots that behave more
ethically in battle than human soldiers. One scientist recommended that
roboticists who are asked to work on a military project make sure they
agree with the goals of the organization and that they will be able to
publish their research. Ronald Arkin
noted that the Pentagon is determined to
create war-fighting robots, so it's up to socially responsible scientists
to make sure the robots are given ethical self-control. Peter Asaro
countered that scientists should not dignify what he considers the
naive notion that robots can be programmed to kill only in an
ethical fashion. He believes robots should not be allowed to make
autonomous "killing decisions".
Is this being argued by the wrong people? Folks who start wars aren't "socially responsible" in the terms I imagine are meant here. To pare down the problem a bit: anyone who decides to achieve their goals by forcibly coercing another person or denying their liberties has tossed ethics out the door. Even worse when they want to do that with lethal force.
So take the cliche scenario of Mr. Balaclava pointing a gun and saying, "Give me your money or I'll kill you." The most "socially responsible" person is going to wait for confirmation of the attack and do everything they can to eliminate the threat once it is realized. The rest of us will take his word for it and do something now (whether it's give him the money or fight back, depending on our assessment of the situation).
So some amount of force is warranted when you are under attack. I guess the ethical component of this is how much force, and what defines being "under attack". And we can't get away with just having the robot shoot back at whatever shoots at it; then anyone could simply walk right past it without firing.
So it gets back to http://robots.net/article/2456.html and my comments that there isn't an "ethical" solution that we can reliably implement. Ethics are something we humans have somehow come up with. And once you manage to figure out how to give robots the ability to think for themselves, you have lost the control this whole approach depends on.
It's all very well for academics and philosophers to ponder the ethics of war, but once war does actually break out, ethics quickly go out of the window. The military are not at all concerned with ethics. Their job is to fight and win battles within the rules of engagement.
That said, it is the duty of roboticists to try to maintain high standards, and to avoid getting involved with projects which might be considered unethical and bring the field into disrepute.