Evolution of a Winged Robot
Posted 16 Aug 2002 at 05:13 UTC by steve
A recent research project by Krister Wolff and Peter
Nordin of Chalmers University of Technology in Gothenburg, Sweden:
the researchers used evolutionary programming algorithms to develop
flapping techniques in a winged robot. In addition to the obvious
robotics applications, the exercise demonstrated that viable flying
motions can evolve naturally.
Learning?, posted 16 Aug 2002 at 07:58 UTC by robodave »
This sounds really cool. I'm guessing that they are going to apply the
learning algorithm to some of the biped robots they have, like Elvina
and maybe Priscilla when that one is done. Instead of pattern bipeds
like Asimo and SDR-4X, perhaps there will be one that goes through a
learning pattern of gait generation. I've seen some simulations of
learned biped walking somewhere, but I can't remember where.
I guess some of you suspected I would cough on this one :) Steve, Did
you post this just to bend my encoders?
Let's see, build hardware which is designed to succeed at some task,
write a program to try various pre-picked, pre-written subroutines using
the random number generator built into the compiler, which was written
and tested by a team of programmers, add an if-then statement to some
sensor that was designed to determine if the machine accomplished the
pre-selected goal which was conceived by the designer, keep track of the
options tried and their success/failure score. When one works, write
down the time it took and proclaim victory! Oh, forgot to mention that
the creator of the project had to also provide power during this experiment.
Many of us hobbyists did projects like this 23 years ago with Erector
sets and 8080-based assembly language programs. But we were naive and
didn't have axe-grinding equipment or a slashdotting to put on our resumes.
Evolution? No. I thought evolution required that order and
intelligence come from nothing? So go ahead and impress me, go to a
local junkyard and collect random parts, you can stare but please don't
touch. When it learns to fly, then let me know. Ok, not fly, I'll
accept a small fart, the wispy kind that don't smell much :)
If this is evolution, it works pretty well with an intelligent designer
behind the wheel ;)
Ahhh, the sound of flame throwers being loaded throughout the land.....
Oops, posted 16 Aug 2002 at 23:31 UTC by steve »
Heh... actually it didn't occur to me at the time that anybody would
have a problem with it. With respect to the evolution vs. creation
thing, it's been discussed to death a million other places, so I don't
want to rehash that here.
But with respect to the experiment itself, I don't think you could
successfully argue that the flying algorithm which evolved in
this case was pre-picked or pre-written. Genetic algorithms and other
types of evolutionary software work the same way evolution in the
real world does (or is believed by some to work, if you prefer). The
source code for this experiment may be available for examination, in
which case you could prove conclusively whether or not it was rigged. If
it's not available, you can find other genetic
and evolutionary algorithm packages on the net that are open source,
fully documented, and still produce real results. There's an old
proverb: If it happens, it must be possible. ;-)
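To make that concrete, here's a bare-bones genetic algorithm in Python on a toy "maximize the number of 1 bits" task. This is just an illustrative sketch - the task, population size, and mutation rate are arbitrary choices of mine, not anything from the Chalmers experiment - but notice that nothing about the solution is pre-written; only the scoring rule is supplied:

```python
import random

GENOME_LEN = 40   # bits per individual
POP_SIZE = 30

def fitness(genome):
    # The only thing supplied in advance: a scoring rule.
    # Here, count the 1 bits; the "answer" itself is never written down.
    return sum(genome)

def crossover(a, b):
    # Splice two parents together at a random cut point.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):
    # Occasionally flip a bit at random.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

random.seed(0)
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break
    survivors = population[:10]        # selection: only the fittest breed
    population = survivors + [
        mutate(crossover(random.choice(survivors), random.choice(survivors)))
        for _ in range(POP_SIZE - len(survivors))
    ]

best = max(population, key=fitness)
print(generation, fitness(best))
```

Because survivors carry over unchanged, the best score never decreases, and the all-ones genome emerges well within the 500-generation cap - random variation plus selection, nothing rigged.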
I thought it was really silly how they claimed evolution took zillions
of years, but their experiment did the same thing in three hours. Give
me a freaking break. These scientists got the output they expected.
This was not really emergent behavior from nothingness; it was expected
results from choosing the best pattern. If they really wanted to
reproduce evolutionary behavior (which I don't believe they can), they
would have to just create an innumerable bunch of neurons and see what
behavior happened. For that matter, they would most likely have to make
zillions of these creatures, hoping with vast improbability that within
their lifetime one would perchance type the works of Shakespeare, much
less figure out how to fly. This experiment was very contrived and
their statement bogus. Instead, what they did was give the critter
instructions on what to do and what goals to attain, and golly gee whiz,
in three hours it did it. That's hardly reproducing the evolutionary
behavior they claimed: doing in three hours what the world supposedly
couldn't do in zillions of years. Like comparing apples and oranges, I'd say.
Perhaps separating out the hype of the article's author from the work
the researchers have done might be more productive. The use of the
description "evolution" and the story presentation to produce
something readable and understandable to lay folks, while furthering a
supposition of the evolutionary model, is the journalist's take. But if
you look at the possibilities that the researchers Wolff and Nordin are
working on, disregarding their choice of terminology, it could be a
pretty good experiment. Perhaps others have done this research before
in other forms and with other hardware, but have they bothered to write
about it, where others could learn of their results without having to
reinvent the wheel? I know, part of the fun of building robots is
learning how different objects work together.
I read an article a year or so ago in EE Times about a researcher who
wrote a program to randomly program an FPGA, with the intent of
evolving an oscillator.
As I remember it, after several (thousand?) iterations, it did indeed
produce an oscillator. However, on inspection of the FPGA's cells, it
was an oscillator of bizarre construction, and a different FPGA
wouldn't necessarily oscillate from the same firmware, since the
evolved circuit relied on parameters that would vary between FPGAs.
I think Attila and Genghis used a similar method to learn to walk (this
was discussed in an issue of Scientific American magazine).
We did a story on the
FPGA back in 2001. The link to the news article has long since gone dead
but the link to Adrian
Thompson's home page is still good and he has quite a few papers on
the subject of artificial evolution of electronic circuits and robotics.
Also there's a very brief summary of the FPGA project here.
The cool thing about the FPGA project is that it evolved a circuit
layout that was extra weird - it not only worked, but initially no one
could explain why it worked. There were groups of cells in the FPGA that were
completely unconnected to the main circuit but if they were removed the
system stopped working. If I remember correctly, it was eventually
determined that the evolved
design was using the RF interference created by the operation of the
isolated group of cells to influence the timing of the main circuit. The
main circuit included a group of cells that were connected in a way
which made them very sensitive to RF interference. The whole RF
interaction was the part that probably wouldn't work if you moved the
design to a different FPGA chip.
There are many similar stories of evolutionary techniques in software
design, where the resulting software succeeds at accomplishing a task, but
the researchers are initially unsure how because the methods being used
are unusual or new. There is the now famous example of Thomas Ray's
artificial life experiment. The genetic algorithms he was using
to create the "genome" of his software-based lifeforms stumbled onto the
code optimization technique called "unrolling the loop" which, while
well-known in some computer science circles, was unknown to Ray at the
time. One of the interesting things about most genetic algorithm
projects is how often the evolved designs are completely
unexpected by the researchers doing the projects.
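For anyone who hasn't run into the term, "unrolling the loop" just means doing several iterations' worth of work per pass so the loop bookkeeping runs less often. A rough Python sketch of the idea (Ray's Tierra creatures did this in their machine-code genomes, not in anything like this toy):

```python
def sum_rolled(xs):
    # Straightforward loop: one element per iteration.
    total = 0
    for x in xs:
        total += x
    return total

def sum_unrolled(xs):
    # Unrolled by a factor of four: each pass handles four elements,
    # so the loop overhead runs a quarter as often.
    total = 0
    n = len(xs) - len(xs) % 4
    for i in range(0, n, 4):
        total += xs[i] + xs[i + 1] + xs[i + 2] + xs[i + 3]
    for x in xs[n:]:   # the leftover 0-3 elements
        total += x
    return total
```

Both return the same result; the unrolled version trades code size for less per-iteration overhead, which is exactly the sort of trick an evolved genome can stumble onto by duplicating instructions.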
I think one of the things that confuses people with genetic algorithms
(or evolution in general) is the use of randomness as an input to a
fitness function. Some people seem to have an emotional reaction to
"randomness" and never see the part about "fitness" (I guess this is
where the analogies to tornadoes in junkyards come from?). I saw a great
genetic algorithm tutorial once that ran two simultaneous examples of
evolution - one with random input and a fitness function to pick
survivors and one with just random input. The goal was to spell out a
particular sentence. The random input was a string of randomly selected
characters. The simulation with the fitness function reached the goal in
a fairly small number of iterations (hundreds or thousands) whereas the
one that just relied on randomness never did, even after running for
millions or billions of iterations, because the odds against it were so
astronomical.
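That two-track demonstration is easy to reproduce; the classic version is Dawkins' "weasel" program. Here's a rough Python sketch of both tracks - the target string, litter size, mutation rate, and iteration budget are arbitrary choices of mine, not the tutorial's:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "   # 27 possible characters

def fitness(s):
    # Count characters that match the target in the right position.
    return sum(a == b for a, b in zip(s, TARGET))

def random_string():
    return "".join(random.choice(ALPHABET) for _ in TARGET)

def mutate(s, rate=0.04):
    # Copy the parent, occasionally replacing a character at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

random.seed(42)
BUDGET = 10_000   # iterations allowed to each strategy

# Track 1: random variation PLUS a fitness function to pick survivors.
parent, guided_gens = random_string(), None
for gen in range(1, BUDGET + 1):
    litter = [parent] + [mutate(parent) for _ in range(50)]
    parent = max(litter, key=fitness)    # the fitness function selects
    if parent == TARGET:
        guided_gens = gen
        break

# Track 2: pure random draws, no selection at all.
random_hits = sum(random_string() == TARGET for _ in range(BUDGET))

print(guided_gens)   # guided search reaches the target within the budget
print(random_hits)   # pure chance never does: 1 in 27**28 per draw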