Patent Granted on Laws of Robotics
Posted 16 Aug 2003 at 14:35 UTC by steve
Just when you think the US Patent Office has reached the pinnacle of
stupidity, they surprise you by achieving a new extreme of
insanity. Move over software patents, gene patents, and business method
patents: now you can patent systems of ethics. In July 2003, a patent
was granted for "The Ten
Ethical Laws of Robotics" to John E. LaMuth,
a family counselor and author of self-help
books on ethics. LaMuth's "holistic theory" of ethics reads like a
mix of Greek philosophy, Freudian psychology, and new-age psychobabble.
But maybe it will be profitable. Think of all the patent royalties
Moses could have collected by
now on his 10 Commandments, or the lucrative lawsuits from competing
religious sects suing each other over patent infringement. Of course, the
most obvious question here is why Asimov's
4 Laws of Robotics weren't considered prior art. For more
on LaMuth and his patent see this Lucerne
Valley Daily Press article.
Be careful what you think, it may be patented.
Patents in general are a good idea, but some of them are trivial or
just plain crazy.
That is plain idiotic.
But anyone developing AI will now avoid all these idiotic patents.
So it looks like Matrix or Terminator scenarios in the future.
If someone patents ways to tell what's right from wrong, who's going to
program them in, if you have to license them?
Unless the license is free I sure as heck ain't going to use them.
Kinda off topic, posted 18 Aug 2003 at 19:36 UTC by jstrohm »
I really have an issue with Asimov's Laws of Robotics. Maybe I'm
wrong but it seems to me any robot with the intelligence to be able to
interpret these laws would be intelligent enough to pass the Turing test
and have a "human like" mind. Your average "do a task and repeat" robot
is never going to be able to apply these rules.
The first problem I have with these laws is that I don't think it is
feasible to program these laws into a computer. I don't like saying
"can't" but I just think that for a program as sophisticated as a true
AI programming these rules would not be trivial.
Secondly I see these rules as having major ethical issues. If I took a
human and "forced" or "brainwashed" these laws into them it would be
considered unethical. So why is doing the same to a true intelligent
machine any different? I guess it gets into robot rights, but I think
we are a long way away from any of this.
Not so off topic, posted 18 Aug 2003 at 20:33 UTC by steve »
Actually, the problem of the robot being smart enough to grok Asimov's
laws is a frequently raised issue. But Asimov's laws are trivial compared to
LaMuth's. Compare Asimov's first law:
A robot may not injure a human being, or, through inaction, allow a
human being to come to harm.
To LaMuth's first law:
As personal authority, I will express my individualism within the
guidelines of the four basic ego states (guilt, worry, nostalgia, and
desire) to the exclusion of the corresponding vices (laziness,
negligence, apathy, and indifference).
Asimov's first law requires at least some definition of what a human is,
an action is, and what sort of actions might harm a human. But it's
probably within our reach to build a machine that could make some good
guesses. On the other hand, I'm not even sure what LaMuth's law
means, so how is a robot going to figure it out and obey it? What is
"nostalgia" with respect to a robot? What is ego and how would the robot
determine if it had one, much less determine how to "express its
individualism" within one of its "states"? And that's one of his more
comprehensible laws. Later ones degenerate into new-age religious talk
requiring the robot to support "ecclesiastical traditions", the "spirit
of ecumenism", and to "profess a sense of eclecticism".
Asimov's laws act more like a safety on a gun or the guard on a power tool
- they were just intended to prevent humans from harming themselves with
an intelligent tool they'd created. (and they're fictional, Asimov
created them to move his plot along in a story, not to use in real robots).
LaMuth's laws look like the result of sloppy thinking by a non-technical
person. They are what Douglas Adams would call a load of dingo's kidneys.
I agree, by the way, that there could exist a level of AI at which such
laws would be an unethical restraint on a being's free will (perhaps, as
you say, like brainwashing). But I suspect there is also a level of AI at
which the machine is still a machine and not a living entity, and the laws would
act merely as a safety mechanism.
What you brought up about "and they're fictional, Asimov created
them to move his plot along in a story, not to use in real
robots" is my real gripe with Asimov's laws. I read news
articles and such which often cite these fictional laws when discussing real robots.
I agree that they are great plot devices but I get tired of modern media
confusing the real with the imaginary. But I guess this is a complaint
about our news services in general and not about Asimov's laws. Grrr...
I think I'm just in a bad mood today, but at least this stuff's making for an interesting discussion.
Back to the topic of LaMuth's patent: I think you are right. If a human
can't understand it, we can't expect an AI to understand it either. Kind
of a Turing test in its own right.
Asimov himself would probably agree that the 3 laws would be hard
to "code" into a robot. Many of his stories dealt with the dilemma of a
situation that may or may not fall into the realm of the three laws...
Licensing terms, posted 21 Aug 2003 at 21:35 UTC by tafkaks »
That is plain idiotic. But anyone developing AI will now avoid all
these idiotic patents. So it looks like Matrix or Terminator scenarios
in the future. If someone patents ways to tell what's right from wrong,
who's going to program them in, if you have to license them? Unless the
license is free I sure as heck ain't going to use them.
Actually, have a look at the patent (or the summary of the laws quoted).
You're not likely to ever need to license these -- this guy is pretty
obviously a crank. Apparently the author of the news piece on him never
bothered to actually have a look at his patent. The post is interesting,
however, as it shows just how dysfunctional the patent office has
become. This patent almost (but not quite) exceeds my favorite patent,
6,368,227, for sheer frivolity.
Time for me to patent that new FTL drive design I've been working on.
You can build it using only a cup of old coffee grounds, a few springs,
some American cheese, and a microwave oven.
I wonder how much a license costs?
If you don't believe in software patents, whether you're a European
or not, there may be something you can do tomorrow that could help make a difference.
You can join the Online Demonstration Against Software Patents.
All you have to do is replace the home page of your web site with a
page declaring your opposition to software patents. If you don't want to
completely replace your home page, why not just add a prominent
statement declaring your support for this cause?
There's more information, including sample pages you can use, at http://swpat.ffii.org/group/demo/index.en.html.
This demonstration is timed to coincide with a protest outside the
European parliament on Wednesday,
but even if you're not European, please consider doing this to show
solidarity. The more web sites throughout the world that support this
protest, the more effective it will be.
Here's a quote from the FFII web site about this:
The idea is that with software patents many sites running/serving
possibly patent infringing software have to go offline sooner or later.
So why not demonstrate this effect before it's too late?