SKIMER: Susan & Kino's Intelligent, Most Excellent Robot

|Sensors / Input Devices|Video camera|
|Actuators / Output Devices|DC drive motors|
|CPU Type|386DX/40|
|Locomotion Method|Driven wheels (six-wheel base)|
SKIMER is used to test ideas in visual navigation. The goal was to see whether one could build a cheap version of CMU's ALVINN, which learns to drive by watching a human driver.
SKIMER uses a large amount of memory to implement a RAM-based visual connectionist scheme called WISARD. SKIMER is not so much programmed as trained: the user drives SKIMER through a task, and it associates visual images with drive commands. When SKIMER later sees an image, it classifies it and executes the remembered command.
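The WISARD scheme described above is a "weightless" neural network: the binarized image is cut into random n-tuples of pixels, each tuple addresses a small RAM per command, training marks the addressed cells, and classification counts hits. A minimal sketch, assuming a simple bit-list image; class names, parameters, and the toy images are illustrative, not SKIMER's actual code:

```python
import random

class Discriminator:
    """One WISARD discriminator per drive command. Each n-tuple of pixel
    positions addresses a small RAM; training marks the addressed cell."""
    def __init__(self, mapping, n):
        self.mapping = mapping                     # shared random pixel shuffle
        self.n = n
        self.rams = [set() for _ in range(len(mapping) // n)]

    def _addresses(self, bits):
        # Turn each n-tuple of shuffled pixels into one RAM address.
        for i in range(len(self.rams)):
            addr = 0
            for j in range(self.n):
                addr = (addr << 1) | bits[self.mapping[i * self.n + j]]
            yield i, addr

    def train(self, bits):
        for i, addr in self._addresses(bits):
            self.rams[i].add(addr)

    def score(self, bits):
        # Count RAMs that saw this address during training.
        return sum(1 for i, addr in self._addresses(bits) if addr in self.rams[i])

class Wisard:
    """One discriminator per command; classification picks the
    discriminator with the most RAM hits."""
    def __init__(self, num_pixels, n=4, seed=0):
        self.mapping = list(range(num_pixels))
        random.Random(seed).shuffle(self.mapping)
        self.n = n
        self.discriminators = {}

    def train(self, image_bits, command):
        self.discriminators.setdefault(
            command, Discriminator(self.mapping, self.n)).train(image_bits)

    def classify(self, image_bits):
        return max(self.discriminators,
                   key=lambda c: self.discriminators[c].score(image_bits))

if __name__ == "__main__":
    # Toy 16-pixel "images": drive forward on one view, turn left on another.
    net = Wisard(num_pixels=16, n=4, seed=1)
    net.train([1] * 8 + [0] * 8, "forward")
    net.train([0] * 8 + [1] * 8, "left")
    print(net.classify([1] * 8 + [0] * 8))   # -> forward
```

Because training only sets bits in the RAMs, retraining for a new task is as fast as driving through it once, which matches the minutes-scale retraining described later.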
SKIMER is a simple model of a visually reactive pilot; higher levels would select modules that had been trained for specific tasks. At one DPRG meeting we trained SKIMER to go around the room using its vision. Then it met Roger's D-BOT. It was confused for a second, but then decided it shouldn't run over a fellow bot and continued its task until it reached its goal, even though it had no previous experience with other bots.
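The layered idea above can be sketched as a registry of trained pilot modules, with a higher level delegating each image to the module for the active task. This is a hypothetical illustration; the class, task names, and `classifier` callable are assumptions, not the authors' design:

```python
class PilotSelector:
    """Higher-level layer: holds one trained pilot module per task and
    routes each image to the module for the active task."""
    def __init__(self):
        self.modules = {}        # task name -> trained classifier callable

    def add_module(self, task, classifier):
        self.modules[task] = classifier

    def command_for(self, task, image):
        if task not in self.modules:
            raise KeyError(f"no pilot module trained for task {task!r}")
        return self.modules[task](image)
```

Each module could be a separately trained WISARD net, so switching tasks is just switching which memory bank classifies the incoming frames.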
SKIMER's head and only sensor is a camcorder. The image is captured by an AT&T Targa frame grabber from a previous project; the video also feeds a short-range UHF TV transmitter. The CPU is a 386DX/40 with 4 MB of RAM and a shock-mounted 60 MB hard drive. The base is a six-wheel all-terrain toy like D-BOT.
Although SKIMER was not the fastest robot to complete the DPRG test, it was the only one to use the visual markers present. It can be retrained for a different task within minutes.
Like most bot owners, SKIMER's creators have constant plans for upgrades and applied research but never enough time. Upgrades would include new visual algorithms, sonar, and a better motor controller.