"Klaatu barada nikto"
—The Day the Earth Stood Still, Dir. Robert Wise (1951)
Science fiction is rife with stories of robots threatening humanity: Terminator, The Matrix, Robocop, and so on. In fact, as we watch drones strike America's enemies across the world, follow the DARPA Robotics Challenge, and see Google purchase a military robot company, it's hard not to imagine that Skynet is not far from our future. Oddly, the Pentagon is trying to prevent that outcome through various lines of research aimed at programming AI to follow the Geneva Conventions and at developing robots with moral reasoning so that they can tell right from wrong.
There is certainly a huge downside, and I mean huge, to allowing autonomous robots the means and authority to kill people. The big criticism is that governments will improperly order their autonomous killing machines to go and kill enemies of the state. I suppose the fear is that robots, unlike humans, will not question their orders. However, history shows us there are far more humans willing to follow orders than to question them.
The real problem is not that soldiers follow orders; it is that governments give those orders. Governments constantly abuse their power of life and death over their subjects and over people in other countries. Currently, someone decides person X is an enemy of the state and orders them killed. That means sending real people to shoot person X, or perhaps real people piloting a drone from thousands of miles away to kill person X. In either case, soldiers are executing the orders given by some government official. If the future of autonomous killing machines comes about, the process will be no different: person X is declared an enemy of the state by someone in the government, that person or their subordinates order the machine to kill, and the machine goes out and does it. There is no moral difference between person X being killed by a soldier or by an autonomous machine. The order to kill originates from a supposedly moral human either way.
The upside to having moral, Geneva Convention-following, autonomous robots is two-fold. First, when a soldier sees person X and shoots, there is no turning back once the weapon is discharged, even if it is a mistake. A robot may have many more opportunities to realize the target is not really the person it is supposed to kill, and thus to abort. Second, autonomous robots may have more opportunity to reduce harm to innocent bystanders. When humans are involved, anger, fatigue, and tunnel vision (on the mission) can lead to poor decisions that harm more innocent people than if a non-tiring, unemotional robot were deciding how and when to strike its target.
But the biggest upside of a moral robot would come at the moment it accepts orders in the first place. It might be much easier for a robot to question the legality of an action. If it is programmed to abide by the Geneva Conventions, then depending on the sophistication of the AI, it might easily discern illegal orders and refuse to follow them. It could even be programmed to report people who issue it illegal orders. I can imagine a world where the robots are self-aware and moral. In that world, the human-robot war is fought between humans who want to kill other humans contrary to morality and the robots who want to stop them.
The big problem is that no government really wants ethical robots. Ethical robot soldiers would follow international law and honor treaty obligations, something that human-run governments do only when it suits them. In fact, truly moral robots would refuse to engage in much of the combat that occurs in our modern world. This is not something any government really wants (they want mindlessly obedient, order-following killers), so it is unlikely that such robots will be developed. However, since governments want to be seen as moral and law-abiding, we can still hope that the autonomous killing machines of tomorrow really will be programmed to be moral, as the government claims. If that happens, we may finally get some real peace on earth.