
The Ethical Dilemma of Killer Robots

Posted 19 Jan 2007 at 23:59 UTC by steve

An AustralianIT News article discusses the ethical problems raised by robots that are able to autonomously make the decision to kill humans, such as the new Samsung guard robots being deployed along the northern border of South Korea. The $200,000, vision- and gun-equipped robots are said to be able to autonomously choose between sounding an alarm or opening fire on anyone who doesn't correctly respond to them with a predefined password. Ethicists point out that these robots won't (and can't) be held legally or morally responsible for their actions. For more, see our previous story on the Samsung border robots being used in Korea.
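To make the ethical point concrete, here is a minimal, purely hypothetical sketch of the kind of challenge-response logic the article describes. Every name, rule, and threshold below is an assumption invented for illustration, not the actual Samsung system; the point is simply that the lethal decision ends up as one more branch in the code, with no human in the loop.

# Illustrative sketch only: these names and rules are assumptions, not the
# real Samsung guard-robot software.

from enum import Enum
from typing import Optional


class Action(Enum):
    STAND_DOWN = "stand down"
    SOUND_ALARM = "sound alarm"
    OPEN_FIRE = "open fire"  # the decision ethicists object to delegating


def respond(person_detected: bool,
            password_given: Optional[str],
            expected_password: str) -> Action:
    """Pick a response to a detected person: a correct password stands the
    robot down; in this sketch, silence draws an alarm and a wrong answer
    draws fire (the article doesn't say exactly how the choice is made)."""
    if not person_detected:
        return Action.STAND_DOWN
    if password_given == expected_password:
        return Action.STAND_DOWN
    if password_given is None:
        return Action.SOUND_ALARM
    # The lethal choice is just another branch, taken autonomously.
    return Action.OPEN_FIRE


if __name__ == "__main__":
    print(respond(True, None, "hypothetical-pass"))            # SOUND_ALARM
    print(respond(True, "wrong-answer", "hypothetical-pass"))  # OPEN_FIRE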


Machines their own moral agents?, posted 20 Jan 2007 at 03:36 UTC by Rog-a-matic » (Master)

For now, a human will have to take responsibility for the actions of the human-created machine until a court decides that the machine is its own moral agent.

Most of us accept that humans are responsible for their actions and worthy of punishments such as prison. I fear that the question of whether a machine is a valid moral agent separate from its creator might be confused by behavior that is emergent or otherwise unexplainable due to complexity of the data used for decision making.

I'm guessing, like other philosophical subjects, there will end up being two major camps of thought - one claiming that these machines are similar enough to humans and therefore should be punished, absolving their creator of responsibility, and one claiming that machines are just complex machines and the responsibility belongs to their creator. A third position might be a completely materialistic view that humans are just complicated machines and neither should accept responsibility.

Legally, I see no reason to distinguish between machines (possessing computing power or not) and robots. A simple lamp fixture can be designed to kill a human at a specific time and under specific circumstances. I don't see any reason to create a special group of laws to deal with a court-derived definition of a robot, but I'm sure there will be, just as there are special groups of laws dealing with publishing depending on the medium used to deliver it.

Questions about responsibility for the actions of 'robots' will probably occur before a machine autonomously selects a human target for termination. One example that comes to mind would be a remote controlled or pre-programmed robot that commits a crime such as stealing.

Rog-A-Matic

Responsibility, posted 20 Jan 2007 at 07:34 UTC by steve » (Master)

I think the source of the ethical dilemma is the responsibility you mention. Where does it come from? Most systems of morals and law (as far back as Aristotle and Augustine) conclude that the responsibility of an agent comes from free will. If the agent acts voluntarily, as a human does, it bears responsibility for its action. If the agent does not have free will (your lamp fixture, for example), it can't bear responsibility for action. Rationality is also generally required. In addition to acting voluntarily, an agent must understand the choice and the action to be fully responsible for it. So, a "crime of passion" or temporary insanity might be a defense against responsibility.

Another word for free will that's more familiar to robot builders is autonomy. Autonomy isn't a simple black and white thing. Your lamp is at one end of the scale and a human is at the other. A virus is probably closer to the lamp. How about an insect? It clearly has some freedom, but not enough that most people would consider it morally responsible. And an insect doesn't have much in the way of rationality. A mammal? In some cultures they've been held to be morally responsible for their actions. We punish pets such as cats and dogs for certain actions, so clearly they're autonomous enough to bear at least some responsibility (on the other hand, the law usually considers the owner or trainer responsible for the animal's actions - there's still a difference in the level of rationality between most animals and most humans).

The most advanced autonomous robots today are roughly equivalent to very simple insects, so nearly everyone would agree that a robot like the Samsung border guard doesn't bear responsibility for its actions any more than a bee or an ant would. But as their level of autonomy (or freedom) continues to advance, the issues become more complex. Think, for example, of a robot with the complexity, autonomy, and rationality of, say, Star Trek's Mr. Data. Would he be morally responsible for his actions? Science fiction has been considering moral dilemmas for decades that are only now occurring to ethicists.

Unless one accepts that machines made of meat possess some quality that can't be duplicated in machines made of other materials, there's no obvious reason to believe robots could not eventually reach the same level of autonomy and rationality as humans and thus bear similar moral responsibilities. Whether or not any of us will live long enough to see robots that advanced is another question...

Agents and machines, posted 20 Jan 2007 at 21:20 UTC by Rog-a-matic » (Master)

Very interesting topic!

I'm thinking more about the ultimate issue than the interim problem of assigning responsibility for actions of a machine.

Using the reasoning you mentioned: If machines eventually match or surpass the processing complexity of humans (and they have already in many ways), and the only difference is 'meat' vs 'silicon', then machines must be dealt with just like humans if our reasoning is to be consistent. That not only means robot prisons, but robot rights, legal status, etc.

My position would be that humans are not only biological machines but that they also possess an internal agent separate from the 'flesh' of their bones, muscles, and brains. It's this internal agent (spirit, soul, whatever) that currently distinguishes humans from machines, and I believe will continue to do so in the future. The internal battle that we humans fight each day between this agent and our flesh is obvious.

Don't get me wrong, I have no idea exactly what this spirit agent thingy is or where it resides (if it resides in physical space at all). Humans at one point didn't know about, and still have much to learn about, gravity, photons, air, and other things that were once thought to be beyond the physical, so I hope we don't consider these matters off limits. This discussion might scare some because it ventures boldly into the territory of religion and the idea of a creator, but to reason this through with an open mind and without barriers, we must go there.

Rog-A-Matic

Robots and Parenting, posted 20 Jan 2007 at 21:26 UTC by MDude » (Journeyer)

If the machines are just machines without the capacity to consider the moral implications of their actions, then the manufacturers are responsible for the mistakes made by the robot, assuming it wasn't from lack of maintenance. If the robots are "people", then the manufacturer is creating sapient creatures for the purpose of sending, no, SELLING them off to war, even if the robots do as they're supposed to. If I were making robots for the military, I think I would want to find out if they were people BEFORE programming them to kill, not after.

You can't sue a Toyota Prius, posted 21 Jan 2007 at 01:40 UTC by The Swirling Brain » (Master)

I recall that the Toyota Prius can parallel park itself! The thing I wonder is, if it were to go into self-parking mode and caused an accident, whose fault would it be? The car's fault? The owner's fault? Toyota's fault? My fault for not putting a stop to this insanity sooner? San Andreas' Fault? God's fault? Well, I guess all are at fault to some degree. No one is 100% NOT at fault in anything, right? Or are they? Who do we blame?

To me, all this fault finding leads to an earth-shattering crevasse in our thinking! There is a huge chasm between what or who is to blame and why we care. For ethics, everyone wants to play the blame game, and sue one another for a buck or what's fair. So really, for them, it's all about money! Face it, life's not fair and never will be. If a self-parking car runs over you and you die, face it, you're dead and no amount of blaming will bring you back. Now if you live, on the other hand, great, sue the owner of the car, sue Toyota, sue your parents for having you, etc., and you will make lots of money! However, suing the car will get you nothing, so don't bother doing that. You might get some salvage value, but that's another story. So really, the question then would be, is it ethical to live and sue everyone, or just go ahead and die and spare everyone a lot of financial burden? For blame can be found in everyone and anything that can be sued by the right lawyer.

Now, when the car starts getting responsible and getting a job and making money, then you can start to sue the car! If you have a decent lawyer. Then, at that point, you would surmise that the car has just become sentient? Well, that's all good and all, but when you sue the machine, you still will get nothing, even with a good lawyer. Why? Because the law handles cases against machines differently and you will certainly lose. Why? Because the cards are pretty much stacked against you in life in everything and you are sure to lose. Haven't you figured that out already? Or are you just daft? I mean, look at those poor defenseless cars and you, you overbearing, hateful human, always wanting to sue everyone. So, is it ethical to sue a car? Not if you're not going to get any money out of it.

Let's explore whether humans have souls and cars don't. What is a soul, or what's a spirit? I'm not sure anyone really knows. It's intangible and not clearly defined in the Bible, so no one really knows. Is it man's program? Or is it man's body when it's quickened? Is it something that's physical, or something beyond this plane of existence? Who really knows? Is it here, or there, or anywhere? Hmm. I'm not sure we'll ever know what it is. So, if Christianity is true (which I personally believe it is), then our "selves" will return to God when we die, the saved will gather with God, and the lost will go to the other place. Cars, on the other hand, would not have that ability since humans didn't put that into them. So in that aspect, cars wouldn't go to heaven. Now, perhaps if we programmed the car's black box to upload to a central server when it crashes, then I guess it would go to a so-called car heaven, if we so engineered and programmed that? But God's heaven, I don't think we know how to program that to happen, so I don't think that would happen when they cease to work. So, we could probably safely say that cars won't go to the place where humans go, and most likely vice versa.

So other than cars not going to human heaven, what does a spirit or soul have to do with robot cars? I'm not sure. But ethically, if they ever do become sentient and if they ever do have lots of money that I can acquire by suing them, I think it would be ethical to do so. I mean, if you can place blame squarely on the car, you should win the case, right? At that point, the car, especially if it's an automatic, will try to shift the blame onto the owner or the manufacturer or onto you or your parents, and so your chances of winning such a case are greatly diminished. Especially if the car has a good car lawyer. In that case, again, you probably won't win, as the car probably has a good point, right? I mean, really, it probably is more someone else's fault after all than the car's for running over you. You being negligent and all for not watching out for self-parking cars.

Moral of the story? In ethics, "ethics" doesn't really matter; it's whether you can sue and make money that matters. Aren't morality and ethics absurd and useless these days?

Really, though, what I think is that for a self-parking car, the person who pushes the button to enact the self-parking mode should take responsibility for that car's actions. If a robot were to become sentient and were able to accept responsibility for its own actions and be insured for its own actions, then it would be the robot's fault and the fault of the human who enacted it. So yes, they'll both be sued. I don't believe that, as long as humans are living, a robot alone by itself can be entirely at fault.

Whose fault is it?, posted 21 Jan 2007 at 13:22 UTC by motters » (Master)

On the topic of self-driving vehicles (as in the Urban Challenge), I think it's exactly these legalistic problems with attribution of blame when anything goes wrong that will be the main hurdles to overcome if we are eventually to have self-driving cars on the streets. I think the technical problems will be solved long before the social/legal ones.

For this reason the first areas to see mass automation of transport will probably be relatively remote, where there are few traffic regulations (or what regulations exist aren't obsessively enforced as they are in some of the more advanced nations), such as China or perhaps Africa.

Fun to think about but..., posted 22 Jan 2007 at 11:34 UTC by dogsbody_d » (Master)

IIRC at present the border is covered in landmines. These'll kill you without asking for a password. Mind you, since they're robot suicide bombers, there's nothing left to sue.

Our Souls (the lot of you), posted 22 Jan 2007 at 11:41 UTC by dogsbody_d » (Master)

Since any two drunken teenagers with genitals to rub together can force God into letting them spew a new soul into the world, why can't Toyota?

Mind you, I just lost a shitload of respect for some of you.
