A new research paper written by COGS researcher Julian Togelius combines the ideas of evolutionary computing, neural networks, and subsumption architecture. A simulated robot using software based on Julian's ideas was able to learn a series of behaviours through a multi-layer evolutionary process with multiple fitness functions.
The paper suggests that a layered evolution approach may solve the chief problem of evolutionary robotics: scaling the software to the point that it can solve complicated, real-world problems. Julian explains layered evolution and differentiates it from approaches such as incremental evolution.
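As a rough sketch of the idea (not the paper's actual code, tasks, or parameters): layered evolution can be read as evolving one behaviour with its own fitness function, freezing it, then evolving the next layer on top, with the lower layer able to suppress the upper one in subsumption style. The sensor layout, fitness cases, and all numbers below are invented for illustration.

```python
import random

random.seed(0)

N_SENSORS = 4              # toy layout: 2 obstacle sensors, 2 light sensors (invented)
N_WEIGHTS = N_SENSORS + 1  # one weight per sensor plus a bias

def act(weights, sensors):
    """One simulated neuron: sign of the weighted sensor sum plus bias."""
    s = weights[-1] + sum(w * x for w, x in zip(weights, sensors))
    return 1 if s > 0 else -1

def evolve(fitness, generations=30, pop_size=20):
    """Minimal truncation-selection evolution over one layer's weights."""
    pop = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]  # keep the best half
        pop = parents + [[w + random.gauss(0, 0.1) for w in random.choice(parents)]
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

# Toy sensor patterns and target motor outputs (stand-ins for the paper's
# simulated tasks): -1 = turn away, +1 = approach.
cases = [([1, 0, 0, 0], -1), ([0, 1, 0, 0], -1),   # obstacle on left/right
         ([0, 0, 1, 0], 1), ([0, 0, 0, 1], 1)]     # light on left/right

def fitness_avoid(w):
    """Layer 1 fitness: react correctly to the obstacle sensors only."""
    return sum(1 for s, t in cases[:2] if act(w, s) == t)

layer1 = evolve(fitness_avoid)  # evolve obstacle avoidance first, then freeze it

def controller(w1, w2, sensors):
    """Subsumption-style arbitration: the avoidance layer suppresses
    the phototaxis layer whenever an obstacle sensor is active."""
    if sensors[0] or sensors[1]:
        return act(w1, sensors)
    return act(w2, sensors)

def fitness_photo(w2):
    """Layer 2 fitness: the new layer is judged on the whole stacked controller."""
    return sum(1 for s, t in cases if controller(layer1, w2, s) == t)

layer2 = evolve(fitness_photo)  # evolved on top of the frozen layer 1
```

Each layer gets its own fitness function, and later layers are evolved in the context of the already-frozen earlier ones, which is what distinguishes this from evolving one monolithic controller against one fitness function.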
Despite the initial claims, there doesn't seem to be any evidence in the paper that this method can be scaled up to much bigger problems. Obstacle avoidance and phototaxis are the typical easy problems that many robotics researchers like to go for, and in this case it looks like only a few simulated neurons were used.
I agree, motters.
There are a lot of AI techniques that work well with small data sets. But in almost every case, once you get beyond a certain quantity of data, the need to intelligently partition the data into something workable frequently breaks the technique. The "collection of experts" approach of combining multiple neural networks is one example.
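The failure mode described above can be sketched with a toy "collection of experts". To keep it self-contained, the experts here are least-squares lines rather than neural networks, and the gating rule that partitions the data is hand-written — which is exactly the part that stops scaling, since at large data volumes a good partition is hard to find. The y = |x| task and all names are illustrative, not from the thread.

```python
# Toy 1-D regression data: y = |x|, a kink that one linear model
# cannot fit but two partitioned "experts" can.
data = [(i / 50.0, abs(i / 50.0)) for i in range(-100, 101)]

def fit_linear(points):
    """Ordinary least-squares line through the points (a stand-in 'expert')."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    den = sum((x - mx) ** 2 for x, _ in points) or 1.0
    a = sum((x - mx) * (y - my) for x, y in points) / den
    return lambda x: my + a * (x - mx)

def gate(x):
    """Hand-written gating rule: route by the sign of the input.
    Finding such a partition automatically, at scale, is the hard part."""
    return 0 if x < 0 else 1

# Train one expert per partition of the data.
experts = [fit_linear([p for p in data if gate(p[0]) == i]) for i in (0, 1)]

def ensemble(x):
    return experts[gate(x)](x)

single = fit_linear(data)  # one global model for comparison
err_ensemble = sum((ensemble(x) - y) ** 2 for x, y in data)
err_single = sum((single(x) - y) ** 2 for x, y in data)
```

With the right partition the ensemble fits the kink almost exactly while the single model cannot; the point of the comment is that the partition itself becomes the unsolved problem as the data grows.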