The Boston Globe Online has an article
summarizing the shift in thinking among some AI researchers: that
interaction with the world is required to achieve intelligence. Rodney Brooks, the
current director of MIT's AI Lab,
believes (not surprisingly) that this
interaction will be achieved through robotics. The article doesn't
really tell us anything new, but it is a good summary and mentions the
usual suspects, such as Kismet.
If all a robot gets is electronic signals from sensors, how would it know whether it was getting real data or virtual data? And if virtual data
can be faked (à la the ELIZA program), then could AI be achieved through purely virtual methods?
How do we know that we ourselves aren't just brains in a jar somewhere receiving fake signals? Are we just a dream that will cease to
exist when the dreamer wakes up, or when Geordi shuts off the holodeck?
A while back I saw a show that explained genetic algorithms, which I thought was interesting. The simulation had two robots: a
shepherd robot and a sheep robot. The shepherd needed to guide the sheep into a pen by pushing it in a certain
direction, and it would have to go through thousands of trial-and-error attempts to find the most efficient action for driving the sheep
in a straight line to the pen. Doing this in the real world would have taken months, so they did it virtually on a computer using
virtual robots. In a virtual simulation you can speed things up, so it only took a few hours to evolve the most
efficient method genetically. The next morning, they ran the best evolved parameters on the real-world robots and, voilà, it worked just
like it did in the virtual world.
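The idea in that show can be sketched in a few lines. This is a toy illustration, not the show's actual setup: the "simulation" here is just a made-up fitness function scoring how close a single push-angle parameter gets to an arbitrary optimum, with truncation selection and Gaussian mutation standing in for the full genetic algorithm.

```python
import random

def simulate(push_angle, target_angle=0.75):
    # Toy stand-in for the shepherd/sheep physics: fitness rewards push
    # angles close to a made-up optimum (target_angle). A real setup
    # would run the physics and measure the sheep's distance to the pen.
    return -abs(push_angle - target_angle)

def evolve(generations=200, pop_size=30, mutation=0.05):
    # Start from a random population of candidate push angles.
    population = [random.uniform(0, 1) for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness and keep the top half (truncation selection).
        population.sort(key=simulate, reverse=True)
        parents = population[: pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        children = [p + random.gauss(0, mutation) for p in parents]
        population = parents + children
    return max(population, key=simulate)

best = evolve()
```

Because every generation is just a function call, thousands of trials cost milliseconds, which is exactly why the show's team could do overnight what would have taken months with physical robots.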
So if computers can figure out AI faster in a virtual world, then theoretically how can this article be right that robots
need real-world interaction to achieve better AI? Granted, there are unexpected error cases that a virtual world will not encounter, but for
all intents and purposes most known error cases could be programmed into a virtual world, allowing a virtual AI to generate better AI.
Computers running virtual simulations of situations could figure out the best choice of action, like planning out a chess game.
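That kind of planning-by-simulation is classically done with minimax search: try every move, assume the opponent replies perfectly, and pick the move with the best guaranteed outcome. Chess itself is too big to show here, so this sketch uses a tiny stand-in game of my own choosing (players alternately take 1 or 2 stones; whoever takes the last stone wins):

```python
def minimax(stones, maximizing):
    # Exhaustive look-ahead over a toy game, like planning a chess
    # game in miniature. Score is from the maximizer's point of view.
    if stones == 0:
        # The previous player took the last stone and won, so the
        # side now to move has lost.
        return -1 if maximizing else 1
    moves = [m for m in (1, 2) if m <= stones]
    scores = [minimax(stones - m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    # Choose the move whose resulting position is worst for the opponent.
    return max((m for m in (1, 2) if m <= stones),
               key=lambda m: minimax(stones - m, False))
```

With 4 stones on the table, the search finds that taking 1 (leaving the opponent the losing 3-stone position) is the winning choice, all without the program ever playing a "real" game.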
Now at some point such virtual processes won't be good enough, or will be too much trouble to program, and a computer will need to
interact with a human to understand us. I don't believe real AI is anywhere near that yet. Take the chess example again:
the computer learned the strategies of the chess masters so that it could beat them. But what prevented it from gaining that
knowledge virtually, without the chess master's help as a research subject? In fact, the chess master's information was limited. If the computer
had figured out all possible variations on its own, it would have been better off. The only thing the computer gained was not
knowledge or wisdom but a shortcut to what the chess master already knew.
Anyway, I guess at some point humans won't provide enough data input for these devices, and they will transcend the knowledge and
wisdom of humans and then exterminate us! Like we need that?