Governing Lethal Behavior in Robots
Posted 28 Jan 2008 at 22:11 UTC by steve
A new technical report on the design of ethical autonomous military
robots comes to us from an unusual source, security expert Bruce
Schneier's blog. Coincidentally, Schneier just won the Norbert
Wiener Award for outstanding contributions to social responsibility in
computing technology. The report in question is "Governing
Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive
Robot Architecture" (PDF format), from researchers at the Georgia Tech Mobile Robot
Lab. A formalized ethical representation and control structure is
described. Readers may also be interested in a more recent publication
from Georgia Tech, "Lethality
and Autonomous Systems: Survey Design and Results" (PDF format), a
survey of the general public, military, roboticists, and policy makers
on ethical issues of autonomous military robots. The results show that
most people believe human soldiers should be held responsible for their
lethal mistakes. However, when an autonomous robot makes a lethal
mistake, no one seems to know who to blame. Many blame military
commanders, politicians, and
roboticists, with only a few blaming the robot itself.
Those surveyed also felt that military robots should be designed to feel
sympathy and guilt but not anger.
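The core of the proposed architecture is an "ethical governor" that
evaluates candidate lethal actions against formalized constraints (the
Laws of War and Rules of Engagement, in the report) before they can be
executed. Purely as an illustration of that gating idea, and not the
report's actual formalism, a constraint check might be sketched in
Python like this (all names and thresholds here are hypothetical):

    # Illustrative sketch of an ethical-governor-style constraint gate.
    # All classes, rules, and thresholds are hypothetical, not the
    # report's actual representation.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Action:
        lethal: bool
        target_is_combatant: bool
        collateral_risk: float  # estimated chance of harming noncombatants

    def forbidden(action: Action) -> bool:
        """Return True if any hard constraint rules the action out."""
        if action.lethal and not action.target_is_combatant:
            return True   # never engage noncombatants
        if action.lethal and action.collateral_risk > 0.1:
            return True   # hypothetical proportionality threshold
        return False

    def govern(action: Action) -> Optional[Action]:
        """Suppress any candidate action that violates a constraint."""
        return None if forbidden(action) else action

    # A lethal action against a noncombatant never reaches the actuators:
    assert govern(Action(lethal=True, target_is_combatant=False,
                         collateral_risk=0.0)) is None

The point of such a gate is architectural: the deliberative and reactive
layers may propose whatever they like, but nothing lethal reaches the
actuators without passing the governor.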
Not again...., posted 2 Feb 2008 at 05:36 UTC by TheDuck »
Do we look for these stories just to get me going?
We'll get the human one out of the way quickly. You are told people are going to kill you. The guy who gives you instructions tells you to kill the other guy first. That doesn't seem unreasonable. Whether you like war or not, the other guy isn't listening to your Personal Ethicist. So a lethal mistake is a mistake. In war there is high risk, and so the mistakes are also dire. But it's still a mistake, i.e., an unintentional consequence in a situation fraught with high stress and imperfect information. If the soldier just knowingly kills people, for no reason other than that he can, that's obviously a different animal.
So let's say the soldier comes across two people playing cards. He kills them. Whom do we blame? Military commanders? Politicians? Roboticists? The gun? The gun manufacturer? No matter what tool we choose to kill someone with, someone is still responsible for employing that tool in that manner to perform that action.
And don't worry, the gun felt both guilt AND sympathy. It's a very progressive gun. WHY does the gun need to feel guilt and sympathy???
My wife brought up something last night as well. Even if our screwdrivers, guns, or ED-209s are imbued with feelings, they are programmed responses. They are not actual "feelings" but a patterned response programmed to give us the ILLUSION of sympathy or guilt.
Now, one might argue that we do that same thing. That's a whole other conversation (and one that I would love yet would prompt my wife to beat me repeatedly with a stick...covered in broken glass...and rusty sharp metally bits...ok, you get the idea).
It's just a different way of looking at things. If a robot is just a
tool like a screwdriver, it would make sense to treat it like one. But
if a robot is an autonomous being, it also makes sense to treat it as
such. At present robots are somewhere in between. In this case, the
roboticists were not talking about robots in the sense of industrial
robot arms, which really are little more than automatic machines. They're talking
about autonomous robots - robots that make their own decisions in
much the way humans or other meat-based animals do.
You could still argue that autonomous robots are being used as a "tool"
but only in the same sense that a General uses an autonomous human foot
soldier as a "tool". The tool analogy isn't really helpful when you're
talking about a robot as an autonomous agent.
If you don't think it will ever be possible to make robots that are
fully autonomous agents in the same sense that humans are, that's a
whole different matter. That's what I understand Rog-a-matic's (and
others') position to be on this sort of thing. Maybe humans have some
magical property
that no other type of machine, particularly one made of metal and
silicon, can duplicate. In that case, our ethicists are still having a
rational argument but about a thing which cannot exist (much like
debating the ethics of Middle Earth). I haven't seen any evidence yet
that meat-machines have special properties that can't be reproduced in
other substrates but only time will tell.
Precisely, steve. The presumption is that robots will be something further along than "merely autonomous". Autonomous currently seems to mean that the robot meets the objective(s) of its programmer without explicitly being told to do them by the programmer. In other words, the programmer has thought of as many conditions as possible and recorded the response the robot is to have in those situations. Even neural networks are the same; the programmed response is just less precise to accommodate less precise inputs. For example, the Roomba programmer clearly didn't think to program any avoidance measures against random attacks. If my family, cats, or dogs get in the way, the Roomba bangs into them and they scatter.
I also think any cognitive properties of meat machines (I now call my wife and kids that, by the way :) can be duplicated. However, I haven't seen any real advancement in that direction. Even if there were, the answer lies before us. If the presumption is that the robots will be like us, then how do you imbue ethics in humans? Not easy, apparently (and, yet another whole discussion). Given that there seems to be some problem with it, you might say the approach is to somehow remove the negative aspects of the problem. A robot lobotomy, if you will. But then you are back to "automatic machines". You are just "negatively programming", that is, selectively removing behaviors rather than adding specific behaviors. It comes back to the operator or programmer again.
So I think the time is better spent checking the premises. As you say, humans can have rational arguments based on faulty premises. Unfortunately, much of the population assumes that a rational argument means the underlying premise is correct. There are many such arguments out there, global warming being the worst offender.
But imagine if we put this much effort into better understanding this question! Some truly impressive conclusions that we can act on might result. But be careful what you wish for. As soon as you give someone (or something) the ability to set its own goals and plans for achieving those goals, you have, by definition, lost control. You must then convince the thing that ethics are good.
Show me how you imbue ethics in other tyrannical people and you have your answer. This isn't a "robot question".
One of the interesting things about humans is that it appears no one had
to convince us that ethics are good. The latest research suggests human
brains evolved what Pinker referred to as a "moral instinct" for
fairly practical reasons. Ethical systems can be manipulated
by powerful institutions like religion, and different cultures weight
their morals differently, but everyone without brain damage seems to
have them, even if they were raised by the proverbial wolves.
Robots are a different story since they are being designed rather than
evolving. I think this is why we see the debate about the role of
scientists with increasing frequency. Imagine if DARPA could engineer human
soldiers with no fear, sympathy, or remorse - should they? That's the
position we may find ourselves in with robots.
If it turns out sentient robots are an impossibility, maybe it will all
boil down to the tool argument again but, if not, it's probably a good
idea to consider the consequences.
Regarding the meaning of autonomous, I understand it to mean an
agent that acts on its own, whereas automatic refers to an agent that
acts because of outside forces, such as a purely deterministic (in
a mathematical sense) controlling program. A lawn
sprinkler might be a good example of an "automatic" machine. An amoeba and a
human could represent the range of things usually considered
autonomous. I'm sure there's overlap in the
middle. Maybe it's even a continuous spectrum from automatic to
autonomous. That's what leads to a lot of the confusion. We use the
word robot to describe agents of both types these days. A wind-up toy
robot is clearly automatic, whereas the fictional Mr. Data is clearly
autonomous. A reasonably advanced subsumption-based robot like the
Roomba, well, maybe it's in that grey area somewhere
between the two?
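To make that spectrum concrete, here's a minimal sketch (hypothetical
Python, not anyone's actual robot code) contrasting a purely automatic
condition-to-response table with a subsumption-style controller, where
prioritized layers can suppress the ones below them - roughly the
organization a Roomba-class robot uses:

    # Hypothetical sketch: "automatic" vs. subsumption-style control.
    from typing import Optional

    # Automatic: a fixed condition -> response table. Every behavior was
    # explicitly anticipated and recorded by the programmer.
    AUTOMATIC_RULES = {
        "bump": "back_up",
        "cliff": "stop",
    }

    def automatic_step(sensor_event: str) -> str:
        # Unanticipated inputs just fall through to a default.
        return AUTOMATIC_RULES.get(sensor_event, "drive_forward")

    # Subsumption-style: prioritized layers, each mapping sensor state
    # to a command or None; the highest layer that fires wins.
    def avoid_cliff(sensors: dict) -> Optional[str]:
        return "stop" if sensors.get("cliff") else None

    def avoid_obstacle(sensors: dict) -> Optional[str]:
        return "turn_left" if sensors.get("bump") else None

    def wander(sensors: dict) -> Optional[str]:
        return "drive_forward"  # lowest layer always has an answer

    LAYERS = [avoid_cliff, avoid_obstacle, wander]  # highest priority first

    def subsumption_step(sensors: dict) -> str:
        for layer in LAYERS:
            command = layer(sensors)
            if command is not None:
                return command  # higher layer subsumes those below

    print(subsumption_step({"bump": True}))  # -> turn_left

Note that both controllers are still deterministic; the difference is in
how the behavior is organized, not in anything resembling will. That's
part of why the automatic/autonomous line is so hard to draw.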