Hooboy... Lots of robotics-related updates since the last diary entry:
I managed to fix a logic bug in the Phase IV (tourguide) button handling code that hadn't been debouncing the buttons properly during tours. It was a problem because kids like to press the buttons on Zaza's acrylic hood (repeatedly), which caused a pile of voice cues (listing the intended exhibit destinations, or giving information on the current exhibit if the robot had already arrived at one) to stack up, sounding a bit like a broken record :P
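The fix amounts to something along these lines. This is just a Python sketch with made-up names (the real code is part of the tour-guide stack, and the 2-second hold-off is an assumption, not Zaza's actual value): repeat presses of the same button inside a hold-off window get ignored, so rapid-fire pressing can't stack up cues.

```python
import time

DEBOUNCE_S = 2.0  # assumed hold-off window; the real value may differ

class ButtonDebouncer:
    """Drop repeat presses of the same button inside a hold-off window."""

    def __init__(self, holdoff=DEBOUNCE_S, clock=time.monotonic):
        self._holdoff = holdoff
        self._clock = clock       # injectable clock makes this testable
        self._last = {}           # button id -> time of last accepted press

    def accept(self, button):
        now = self._clock()
        last = self._last.get(button)
        if last is not None and now - last < self._holdoff:
            return False          # bounce/repeat press: ignore it
        self._last[button] = now
        return True
```

Only accepted presses trigger a voice cue; everything else is silently dropped, which is exactly the "broken record" cure.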
I also fixed a problem with the POE face client (which interfaces remote Java face applets with the voice server) that would occasionally cause remote face applets to 'lose sync' and stop receiving new voice cues from the voiceServer.
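One cheap way to make that kind of desync visible instead of silent is to tag each cue with a sequence number and have the client jump forward when it spots a gap. Purely an illustration in Python (the real face client is Perl/POE, and the sequence-number scheme here is my own sketch, not the actual protocol):

```python
class CueStream:
    """Track cue messages by sequence number so a client notices when
    it has lost sync, rather than silently stalling forever."""

    def __init__(self):
        self.expected = 0   # next sequence number we expect to see
        self.resyncs = 0    # how many times we had to jump forward

    def receive(self, seq, cue):
        if seq != self.expected:
            self.resyncs += 1       # gap detected: we missed some cues
        self.expected = seq + 1     # resync to the server's counter
        return cue
```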
During Phase IV testing over the last two weekends I noticed a problem with the voiceServer not rendering some of the voice cues that list the exhibit destinations during a tour. It looks like a race condition exists when there is more than one active (unvisited) goal: the cues get sent to the voiceServer faster than it can handle them, and the first cue in the burst is dropped. This has compelled me to consider rewriting the voiceServer to use POE to manage the threading issues. I had a quick glance at David Huggins-Daines' POE::Component::Festival and it looks like it could be used in place of direct access to Festival::Client::Async for POE, but both the Perl module and the 'synth-poe' example needed to be tweaked before they would work.
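The basic idea behind the rewrite is to stop letting callers talk to the synthesizer directly and instead funnel every cue through a single queue with one consumer, so a burst of cues can't race each other. A minimal Python sketch of that pattern (the actual voiceServer is Perl, and would get this serialization from POE's event loop rather than a thread):

```python
import queue
import threading

class CueDispatcher:
    """Serialize cue delivery through one worker so bursts of cues
    can't race each other into the voice server: nothing is dropped,
    and arrival order is preserved."""

    def __init__(self, send):
        self._send = send             # callable that pushes one cue out
        self._q = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def dispatch(self, cue):
        self._q.put(cue)              # producers never block or drop

    def _run(self):
        while True:
            cue = self._q.get()
            try:
                if cue is None:       # sentinel: shut down cleanly
                    return
                self._send(cue)       # only this thread touches the server
            finally:
                self._q.task_done()

    def close(self):
        self._q.put(None)
        self._worker.join()
```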
I stumbled across some new voices for Festival from CMU last week. The first (sample) uses 'cluster unit selection', which basically assembles a synthesized waveform from pieces of a database of pre-recorded, tagged speech. The results vary from fantastic to horrible, and rendering takes far too long on existing hardware, eliminating the possibility of real-time synthesis. It's unlikely we will be using this one with Zaza. The second (sample) uses the same ARCTIC database, but converted to diphones through HMM analysis. It has less inflection than the current voice (sample), but seems to be quite a bit more intelligible on Zaza's current audio hardware. Due to some differences in text analysis, using the slt_hts voice requires changes to both the face applet and the voiceServer. I have already updated the face applet, and plan to do the same with the voiceServer this Friday.
I also started work on the CARMEN interface, but need to make some API decisions before progressing any further. I am planning to duplicate the critical interfaces used in the open-sourced version of the Nomadic Scout hardware interface library. This should make it easier to integrate with other robotics toolkits like Player/Stage.
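Whatever the final API looks like, the core of the shim is translating CARMEN-style base commands (translational + rotational velocity) into the per-wheel velocities a Scout-style differential drive wants. The kinematics are standard; everything else here (names, the wheel-base constant) is illustrative, not the real library API:

```python
WHEEL_BASE_M = 0.35  # assumed track width; Zaza's actual value differs

def twist_to_wheels(v, w, base=WHEEL_BASE_M):
    """Convert a CARMEN-style command (v in m/s, w in rad/s) into
    (left, right) wheel velocities for a differential-drive base.
    Standard diff-drive kinematics: the outer wheel runs faster by
    half the track width times the turn rate."""
    left = v - w * base / 2.0
    right = v + w * base / 2.0
    return left, right
```

Keeping this translation in one place means the rest of the interface can stay ignorant of the drive geometry, which is what makes swapping in Player/Stage plausible later.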
After a great deal of waiting, Thomas Baier finally released a version of 3DWin that worked with a current version of Moray, allowing me to export my model of the Blue Cube that I made a few years ago to VRML. I also put up the hamaPatch model of one of the Blue Cube's 'quarter panels'. At some point I'll rebuild the model using this more accurate model with BlenderCAD.
I finally decided on i2c as the sensor bus, allowing relatively simple interfaces to sensor nodes like the SRF08. Since the Blue Cube's mainboard doesn't have a built-in i2c interface like the VIA EPIA M motherboards do, I needed to find another sufficiently fast means of interfacing with the i2c network. I had a look at dafyddwalters' OAP parallel port i2c interface, but the lack of proper isolation on the SCL line, and the nonlinearities of using transistors for switching concerned me. I found an old article by Simon G. Vogl that used a slightly better design and made a few modifications. The latest block diagram shows all of the current/planned systems on the robot.
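For a feel of what talking to an SRF08 over the bus involves, here is one ranging transaction sketched in Python against an smbus-style bus object. The SRF08 register layout is from its datasheet (command register 0, result high/low bytes in registers 2 and 3, command 0x51 for a ranging cycle in centimetres); the `bus` object and `wait` hook are placeholders for whatever driver ends up sitting on top of the parallel-port adapter:

```python
SRF08_ADDR = 0x70       # SRF08 default 7-bit i2c address
REG_COMMAND = 0x00      # command register
REG_RANGE_HI = 0x02     # first echo, high byte (low byte is reg 3)
CMD_RANGE_CM = 0x51     # start a ranging cycle, result in centimetres

def read_range_cm(bus, addr=SRF08_ADDR, wait=lambda: None):
    """One SRF08 ranging transaction. `bus` is any object exposing
    write_byte_data()/read_byte_data() in the smbus style; on Zaza it
    would be backed by the parallel-port bit-bang interface."""
    bus.write_byte_data(addr, REG_COMMAND, CMD_RANGE_CM)
    wait()  # ranging takes on the order of 65 ms per the datasheet
    hi = bus.read_byte_data(addr, REG_RANGE_HI)
    lo = bus.read_byte_data(addr, REG_RANGE_HI + 1)
    return (hi << 8) | lo
```

The nice part of settling on i2c is that every extra sonar is just another address on the same two wires.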
I have an initial design for the power distribution board, but need to do some testing before building it. The circuit allows peripheral power to be controlled by the power management uC, as well as automatically switching to battery power when the external +24V dock supply is disconnected.
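The switchover itself happens in hardware when the +24V dock supply drops out, but the power-management uC still has to track which rail is live and shed loads when neither is usable. A toy Python version of that decision logic (thresholds are assumptions for illustration, not measured values from the design):

```python
DOCK_MIN_V = 20.0   # assumed: below this, treat the +24V dock supply as gone
BATT_MIN_V = 22.0   # assumed low-battery cutoff for the pack

def select_source(dock_v, batt_v):
    """Mirror the decision the power-management uC has to make:
    prefer the dock supply, fall back to battery, and shed the
    peripheral loads if neither rail is healthy."""
    if dock_v >= DOCK_MIN_V:
        return "dock"
    if batt_v >= BATT_MIN_V:
        return "battery"
    return "shutdown"
```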