Well, this is an interesting article:

As militaries develop autonomous robotic warriors to replace humans on the battlefield, new ethical questions emerge. If a robot in combat has a hardware malfunction or programming glitch that causes it to kill civilians, do we blame the robot, or the humans who created and deployed it?

Some argue that robots do not have free will and therefore cannot be held morally accountable for their actions. But psychologists are finding that people don’t have such a clear-cut view of humanoid robots.

The author goes on to discuss a recent study that found many humans — regardless of whether they think machines have free will — do blame robots in certain circumstances. Of course, this doesn’t mean robots ought to be blamed for their mistakes. It simply means some humans think they should.

Which raises a deeper and more important question: who — if anyone — should be held accountable for the robot’s mistake? Because you can’t seriously argue that robots should be put on trial or thrown in jail.

Or can you?