I have seen similar methods being used years ago, but with a more complicated binocular periscope type of arrangement. I might try this technique myself at some point.
The only disadvantage to mirroring is that you lose half the resolution per image, and in stereo vision it's horizontal resolution that counts (higher resolution means higher ranging accuracy). However, some modern webcams have quite a high resolution anyway, so this may no longer be an issue.
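To make the resolution/accuracy link concrete, here is a minimal first-order sketch. The focal length, baseline and range figures are illustrative assumptions, not values from any of the setups discussed here:

```python
# Why horizontal resolution drives ranging accuracy in stereo:
# depth Z = f * B / d (f = focal length in pixels, B = baseline,
# d = disparity), so a disparity error of delta_d pixels maps to a
# depth error of roughly dZ = Z^2 * delta_d / (f * B).
# Halving the horizontal resolution halves f in pixels (same field
# of view, half the pixels), which doubles the depth error.

def depth_error_m(depth_m, focal_px, baseline_m, disparity_err_px=1.0):
    """First-order depth error for a given disparity error."""
    return depth_m ** 2 * disparity_err_px / (focal_px * baseline_m)

# Full 640-pixel-wide image vs a mirrored half-image (320 px wide),
# assuming a 10 cm baseline and a target 5 m away.
full = depth_error_m(depth_m=5.0, focal_px=600.0, baseline_m=0.1)
half = depth_error_m(depth_m=5.0, focal_px=300.0, baseline_m=0.1)
print(round(full, 3), round(half, 3))  # half-resolution error is twice as large
```

The square-law dependence on range is also why the effective range of instantaneous stereo pairs is limited: the error grows quickly with distance.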
Getting stereo vision systems to work is undoubtedly a complicated business, involving multiple kinds of algorithm. My own system, still in development, can be found here http://code.google.com/p/sentience/
The mirror system suggested by Nelson Bridwell certainly seems to be an interesting solution to simultaneously snapping a pair of images.
I have done some experimentation myself with two FireWire webcams, and the problem I run into when the robot is moving at speed is that the images are not taken at exactly the same time, which introduces errors.
I wonder, does anyone happen to know if there's a programmatic way of persuading a pair of cheap consumer webcams (either USB or FireWire) to start taking an image at the same instant (i.e. within a millisecond or two)?
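To give a feel for the size of the problem, here is a rough back-of-the-envelope sketch. The speed, frame offset, focal length and depth are all assumed figures for illustration, not measurements from the setup described above:

```python
# Spurious disparity caused by unsynchronised stereo capture.
# If the robot translates sideways between the two exposure instants,
# a stationary world point shifts in the second image, and that shift
# is indistinguishable from real disparity.

def spurious_disparity_px(speed_mps, dt_s, focal_px, depth_m):
    """Image shift (pixels) of a point at depth_m caused by the camera
    moving sideways by speed_mps * dt_s between the two frames."""
    lateral_shift_m = speed_mps * dt_s
    return focal_px * lateral_shift_m / depth_m

# Example: 1 m/s of sideways motion, frames 30 ms apart (about one
# frame period at 30 fps), 500 px focal length, obstacle 5 m away.
err = spurious_disparity_px(1.0, 0.030, 500.0, 5.0)
print(round(err, 2))  # 3.0 pixels of false disparity
```

Several pixels of false disparity is far larger than the sub-pixel matching accuracy most stereo algorithms aim for, which is why even a one-frame offset matters at speed.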
This approach is so simple that I assumed it must already be in use, but when I looked around on the web all I could find was a parallel from stereo photography, developed by Steve Hines several years ago: http://www.hineslab.com/MirrorStereo.html
Most of the inexpensive cameras are VGA (640w x 480h). If you need slightly more depth resolution you can rotate the camera to 640h x 480w. And if you want to try this out in non-real-time at ultra high resolution, try it on a photograph from your 7 MP digital camera! The camera-mirror alignment process is trivial compared to the Leica rangefinder approach.
For most stereo matching algorithms, FireWire cameras produce higher quality uncompressed images that do not wreak havoc on sensitive feature detectors. I use the Unibrain Fire-I board camera (http://www.unibrain.com/index.html) with the CMU 1394 Digital Camera API for Windows (http://www.cs.cmu.edu/~iwan/1394/), which gives you very complete control and works with just about any FireWire camera, because they all use the same standard interface.
When I read the technical reports for the 2005 DARPA Grand Challenge, almost every report showed pictures of vehicles equipped with stereo pairs of cameras, but at the race just about all of them had been removed, presumably because of basic issues such as camera synchronization.
I've been experimenting with stereo vision for a long time, and I don't know of any way to synchronise two cameras using software (I don't think it's possible). The best you can get away with is either to have the robot stop before it takes pictures, or to have the robot travel at a sufficiently low speed that the timing errors are not large. I'm gambling that the companies which manufacture webcams will eventually begin producing dual units and solve this problem for me.
Of course you can buy synchronised and pre-calibrated stereo cameras off the shelf, but these are always very pricey items, typically intended for industrial vision tasks.
I think the reason why stereo was not used more extensively in the Grand Challenge is firstly that stereo vision involves a lot of uncertainty, which ultimately arises from limited pixel resolution or ambiguous correspondences, and secondly that for instantaneous stereo pairs the effective range is quite limited. The way to deal with this, and get a much longer effective range, is either to track features over time or to use an occupancy grid based SLAM method, such as DP-SLAM. Both these approaches remain somewhat experimental at present.
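As a minimal illustration of the occupancy grid idea (not DP-SLAM itself, just the basic log-odds cell update that such methods build on, with assumed sensor-model probabilities), repeated noisy range readings of the same cell accumulate into confidence:

```python
import math

# Each grid cell stores log(p / (1 - p)).  A sensor reading simply adds
# a fixed increment, so independent noisy observations accumulate.
# The 0.7 / 0.3 sensor model below is an illustrative assumption.

L_OCC = math.log(0.7 / 0.3)    # evidence increment for a "hit"
L_FREE = math.log(0.3 / 0.7)   # evidence increment for a "miss"

def update(cell_logodds, hit):
    """Bayesian log-odds update of one cell for one observation."""
    return cell_logodds + (L_OCC if hit else L_FREE)

def probability(logodds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

# Three consecutive noisy detections of the same obstacle cell:
l = 0.0  # prior p = 0.5 (unknown)
for observed in (True, True, True):
    l = update(l, observed)
print(round(probability(l), 3))  # 0.927 -- well above the 0.7 single-reading model
```

This is how integrating over time tames the per-frame depth uncertainty: any single stereo range estimate is unreliable, but the grid converges as evidence accumulates.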
In contrast, using laser rangers is much easier from a programming point of view, and the uncertainties are far smaller, each reading usually being treated as a single point in space. Ultimately, though, I think multi-camera based ranging will trump lasers, since it will provide a very cheap solid-state solution capable of grabbing far more data at each time step than a laser can.
It looks like the closest contender for an inexpensive FireWire camera that you can externally trigger for synchronization is the Point Grey Firefly MV. When Point Grey says inexpensive I tend to be skeptical, but they were so cheap that they were giving them away free at their booth at the Machine Vision Show in Boston. They are supposed to cost less than $200.
Yes, the SICK sensors were the obvious choice for so many time-strapped DARPA teams because they had a serial interface and would generate fairly reliable range values, although they were good for only about 60 feet. (The high school team did not bother starting to work on their stereo until 2 weeks before the qualifying competition. They said to their technical advisor: "OK, we have 2 cameras. Now what do we have to do?")