Military Robotics

Rules of Engagement for Armed Robots

Posted 16 Apr 2007 at 20:07 UTC by steve

"Let the machines target other machines. Let men target Men." So say we all. Or at least so says John S. Canning of the Naval Surface Warfare Center in his recently proposed Concept of Operations (CONOPS) for Autonomous Use of Weapons (PDF format). The military is becoming aware of the problems autonomous robots pose under the Law of Armed Conflict. How do robots fit into existing guidelines such as the Rules of Engagement and Discriminate Use of Force? Who or what can be destroyed by an autonomous robot, and under what conditions? Will the robot be able to distinguish a warship from a cruise ship? Will it be able to distinguish between an armed, determined enemy and a mob of angry but unarmed people?

Canning's proposed rule is that only humans be allowed to target other humans, while autonomous machines are allowed to target only enemy machines. In other words, machines may "target the bow or the arrow but not the archer". A DefenseTech blog notes that the catch to the whole thing is that when the robot targets an enemy machine, say a tank, and destroys it, the humans inside are killed as well. The real benefit to the military of Canning's legal theory is that humans become "collateral damage" rather than "targets" of the robot, presumably lessening the red tape involved in using such weapons.

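To make the proposed rule concrete, here is a minimal, hypothetical Python sketch of the kind of engagement-authorization logic Canning seems to be describing: autonomous engagement is limited to hostile machines, and anything human, or ambiguous, gets referred to a human operator. The Target class, the classification labels, and the engagement_decision function are all invented here for illustration; none of them come from Canning's paper.

    # Hypothetical sketch of Canning's "target the bow, not the archer" rule.
    # All names and labels below are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Target:
        classification: str   # "machine", "human", or "unknown"
        hostile: bool         # hostile act/intent established elsewhere

    def engagement_decision(target: Target) -> str:
        """What is the autonomous system allowed to do with this target?"""
        if not target.hostile:
            return "do not engage"
        if target.classification == "machine":
            # The bow or the arrow: weapons and platforms may be engaged autonomously.
            return "autonomous engagement permitted"
        # The archer, or anything the classifier is unsure about, goes to a person.
        return "refer to human operator for authorization"

    print(engagement_decision(Target("machine", hostile=True)))
    print(engagement_decision(Target("human", hostile=True)))
    print(engagement_decision(Target("unknown", hostile=True)))

Even in this toy form, the hard questions raised in the post, like telling a warship from a cruise ship, are all hidden inside the classification field.
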
Interesting reading, posted 17 Apr 2007 at 12:03 UTC by c6jones720 » (Master)

I think that PDF is fascinating. If I had a penny for every time I've mentioned I do robotics to somebody and they've turned around and said "by developing that technology you are trying to kill people", I'd be rich!

I used to work in bomb disposal and I never had a moral problem with that. However, I have never been too sure about the concept of machines purposely built to kill.

From a moral standpoint, building military robots that only target machines might be the best way forward for that sort of technology.

Idle Speculation, posted 17 Apr 2007 at 16:13 UTC by Nelson » (Journeyer)

At the moment I suspect that the technology is not quite able to reliably discriminate between enemy combatants (IR heat signature, movement, etc.) and bystanders/friendlies in close quarters. We are limited to attacking anything and everything that moves (land mines, the Korean sentry robot, AMRAAMs) or looks "suspicious" (guided cluster submunitions, ...).

However, well before the end of this century I suspect that the tables will have turned. Where war is unavoidable, it might be conducted by lightning-fast autonomous vehicles that could not only be much more accurate and reliable than humans, but would also have a decided edge because of response times thousands or millions of times faster than ours.

And the whole concept of hunting and killing could be primarily a biological strategy that might not make much sense from the perspective of intelligent machines that might feel as threatened by us as we are by slugs...

Robots of Warcraft, posted 18 Apr 2007 at 00:12 UTC by Rog-a-matic » (Master)

I can't imagine anyone having a moral problem with building bomb disposal robots.

Initially, I CAN imagine having a moral issue with building machines that are made to kill other humans, but it doesn't seem to be any different from making any other machine for that purpose. A gun would be an example. The ultimate responsibility must remain with the person directing the use of the machine, whether it be a gun, a baseball bat, or a robot. I don't see any way or reason to make a separate category for robots. If we do, then how do we define a robot: something that has a microprocessor, a sensor, an actuator? Modern bulldozers have all of those components. Robots should be lumped together with all other machines, smart and dumb, for the sake of this discussion.

Thinking this through to its logical conclusion, I can't help but wonder if distant-future wars will be totally virtualized. The side with the best programmers and fastest mouse clicking wins! I doubt it, though.

Roger

robots vs tools, posted 18 Apr 2007 at 01:21 UTC by steve » (Master)

I think the difference is that a bulldozer, classic Ted Sturgeon stories aside, doesn't act autonomously. If a robot is acting autonomously, it means, by definition, there is no one "directing the use of the machine". You could compare an autonomous robot to something like a cruise missile, but that's a very primitive example. In fact, Canning uses the cruise missile and the autonomous Vulcan anti-missile gun as examples of precursors to armed autonomous robots.

Autonomous distinction troublesome, posted 18 Apr 2007 at 14:36 UTC by Rog-a-matic » (Master)

Making a distinction based on autonomous operation is troublesome. We could claim a Rube Goldberg machine was autonomous, yet we should still place complete responsibility for the machine's actions on the person directing it. Not the machine, not the creators, not the farmers who feed the directors, etc.

I'm trying to avoid the problem of putting the responsibility on a machine and not on the human directors where it rightfully belongs. I want to see the responsibility for a machine's actions pass directly to the humans involved in controlling the machine, regardless of the technology used.

We don't want the automated lawn mowers of the future to be viewed by the public as potentially dangerous, independent moral agents. We DO want the public concerned about a hacker who gains control of one.

Avoiding the problem, posted 18 Apr 2007 at 15:23 UTC by steve » (Master)

A Rube Goldberg machine is automated, not autonomous. Both automated and autonomous machines act without human intervention but, beyond that, the words have almost opposite meanings. An automated machine acts without volition or conscious control. It has a director. It is incapable of directing its own actions. An autonomous machine is self-directed. Its actions are not directed by a human.

You and I are autonomous but not automated. We are self-directed. Historically non-meat-based machines have been automated but not autonomous. As this changes, we'll be seeing more stories about these sorts of issues.
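
A toy contrast might help, borrowing the lawn mower from the comment above (the function names, thresholds, and behavior here are invented, not anyone's real product): the automated machine replays a sequence its director laid down in advance, while the autonomous one chooses its own action at run time from what it senses.

    # Automated: replays a fixed sequence laid down in advance by its director.
    def automated_mower():
        for step in ["start blade", "follow the buried boundary wire", "stop after 30 minutes"]:
            print("automated:", step)

    # Autonomous: chooses its own action at run time from what it senses;
    # no one has prescribed the sequence it will follow.
    def autonomous_mower(grass_height_cm: float, obstacle_ahead: bool):
        if obstacle_ahead:
            action = "turn away from the obstacle"
        elif grass_height_cm > 5.0:
            action = "mow this patch"
        else:
            action = "wander and keep sensing"
        print("autonomous:", action)

    automated_mower()
    autonomous_mower(grass_height_cm=7.2, obstacle_ahead=False)

Both are still programs somebody wrote, of course; the difference is only in who, or what, picks the action while the machine is running.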

All's fair in love and war?, posted 19 Apr 2007 at 22:24 UTC by The Swirling Brain » (Master)

War machines help you win wars. You know, when a war starts, just about anything goes. We hold back with the big stuff like biological and nuclear weapons because we figure we can win without them. If we figured we were losing, or the other side started using them, then we'd pull out those big guns too. I mean, we could just nuke the other side and get the war over with if world opinion would let us, but that's not kosher, so we shoot pretty little bullets at the other side and it takes a little longer instead.

Robot war machines are the same. Right now we pull out the tame ones that just do recon or act as remote missile shooters, because that's what we can get away with in world opinion. If we were backed against the wall, though, we might unleash something more controversial and deadly. I mean, we'd have nasty, gruesome robots operational already if it were OK with everyone.

After a war is over, war crimes trials and tribunals are held for the loser. So if the loser is the one with the war robots, the robots will be condemned as an ethical problem: evil machines made by evildoers. If the winner is the one with the robots, the robots will be hailed as helpers of all mankind, ridding the earth of evildoers, regardless of how gruesome or nasty they may be.

