A new Anne Corwin essay posted at the Institute for Ethics and Emerging Technologies does an excellent job of addressing what "autonomous" means. Is a simple C program autonomous? If not, who really made its decisions: the programmer, the boss, the customer? Are the robots common today autonomous? Not in the sense that a person, a cat, or a mouse is autonomous; they're more like tools, something a truly autonomous machine could never be. Are autonomous robots a desirable (or even possible) thing to build? Why are people so spooked by the idea of autonomous machines? Why the rush to sort out the legal issues of robot rights before we have fully autonomous robots to enjoy those rights? She notes that the world is already populated with loads of autonomous animals, yet humans have spent millennia trying to reduce their autonomy. So why is it that we want to increase the autonomy of machines? And if we do create a fully autonomous robot, is it even ethical to expect to keep it in a lab and experiment on it?