Two recent stories show that we've still got a ways to go in making human-robot interaction safe and effective. In an article titled "Erratic fleshies sabotage, wreck innocent flying robot," The Register reports on new air-safety recommendations for the use of autonomous and remotely piloted flying robots in US airspace. The recommendations stem from last year's crash of a Predator B robot, caused by a sequence of events that followed a software lock-up on the remote control console. While the robot was destroyed, no humans were hurt in the Predator crash.

The outcome was not so lucky in a more recent robotics-related incident, in which an Oerlikon GDF-005 robotic anti-aircraft cannon "malfunctioned," killing 9 people and wounding 14. The incident is believed to have been caused by either a mechanical or a software problem. Interestingly, this weapon is a simple automated machine operated under human control, not a fully autonomous weapon like the Samsung Techwin SGR-A1, which makes its own decisions about whom to kill. So at least the ethical and legal ramifications of the accident should be no different from those of a conventional industrial or military accident.