I just finished up some work on using RoboRealm to guide my robot as it reaches toward a target object. The ultimate goal is for the robot to be able to pick up the object from a random location or take it from someone's hands. For now, I simply wanted to work out the coordinate transformations from visual space to arm space to get the two hands to point in the right direction as the target is moved about. The following video shows the results so far:
I don't have a full write-up yet, but the basic idea is a 3-D coordinate transformation: starting from the head servo angles and the distance to the target (measured by sonar and IR sensors mounted near the camera lens), the target's position is converted into a frame of reference attached to each shoulder joint. The Dynamixel AX-12 servos are nice for this application since they can be queried for their current position. The distance to the balloon as measured by the sonar and IR sensors is a little hit-and-miss, and I think I'd get better performance using stereo vision instead.
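For anyone curious what that transformation looks like, here is a minimal sketch of the idea in Python with NumPy. Everything here is an assumption on my part for illustration: the function names, the axis conventions (x forward, y left, z up, pan about z, tilt about y), and the idea of a fixed offset vector from the neck to each shoulder are all hypothetical stand-ins, not the exact code running on the robot.

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z (pan) axis. Angles in radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def rot_y(theta):
    """Rotation about the y (tilt) axis. Angles in radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def target_in_shoulder_frame(pan, tilt, distance, shoulder_offset):
    """Convert a range reading along the camera axis into shoulder coordinates.

    pan, tilt        -- head servo angles (radians), as read back from the AX-12s
    distance         -- range to the target (e.g. from the sonar/IR sensors)
    shoulder_offset  -- vector from the head's rotation center to the shoulder
                        joint, expressed in the torso frame (an assumed geometry)
    """
    # The sensors report the target straight ahead along the camera's x axis.
    target_in_camera = np.array([distance, 0.0, 0.0])
    # Rotate through the head angles to express the target in the torso frame.
    target_in_torso = rot_z(pan) @ rot_y(tilt) @ target_in_camera
    # Shift the origin from the head to the shoulder joint.
    return target_in_torso - shoulder_offset
```

With the head centered (pan = tilt = 0) and a zero offset, the target sits straight ahead at the measured distance; pan the head 90 degrees and the same reading lands off to the side, which is exactly the bookkeeping needed before pointing each arm.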