Book: The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World

War is not for humans

Soldiering is harder to automate than science, but it, too, will be automated. One of the prime uses of robots is to do things that are too dangerous for humans, and fighting wars is about as dangerous as it gets. Robots already defuse bombs, and drones allow a platoon to see over the hill. Self-driving supply trucks and robotic mules are on the way. Soon we will need to decide whether robots are allowed to pull the trigger on their own. The argument for doing this is that we want to get humans out of harm’s way, and remote control is not viable in fast-moving, shoot-or-be-shot situations. The argument against is that robots don’t understand ethics, and so can’t be entrusted with life-or-death decisions. But we can teach them. The deeper question is whether we’re ready to.

It’s not hard to state general principles like military necessity, proportionality, and sparing civilians. But there’s a gulf between them and concrete actions, which the soldier’s judgment has to bridge. Asimov’s three laws of robotics quickly run into trouble when robots try to apply them in practice, as his stories memorably illustrate. General principles are usually contradictory, if not self-contradictory, and they have to be lest they turn all shades of gray into black and white. When does military necessity outweigh sparing civilians? There is no universal answer and no way to program a computer with all the eventualities. Machine learning, however, provides an alternative. First, teach the robot to recognize the relevant concepts, for example with data sets of situations where civilians were and were not spared, armed response was and was not proportional, and so on. Then give it a code of conduct in the form of rules involving these concepts. Finally, let the robot learn how to apply the code by observing humans: the soldier opened fire in this case but not in that case. By generalizing from these examples, the robot can learn an end-to-end model of ethical decision making, in the form of, say, a large Markov logic network (MLN). Once the robot’s decisions agree with a human’s as often as one human agrees with another, the training is complete, meaning the model is ready for download into thousands of robot brains. Unlike humans, robots don’t lose their heads in the heat of combat. If a robot malfunctions, the manufacturer is responsible. If it makes a wrong call, its teachers are.
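
To make the three steps concrete, here is a minimal Python sketch of the idea under strong simplifying assumptions: the learned concept detectors are replaced by hand-set boolean features, the code of conduct by three invented rules, and the Markov logic network by a plain logistic model whose rule weights are fit to made-up human decisions. Every situation, rule, and data point below is hypothetical and purely illustrative, not the book's or anyone's actual system.

```python
import math

# Step 1 stand-in: each situation is described by boolean concepts the robot
# is assumed to have already learned to recognize. Data is invented.
SITUATIONS = [
    # (target_armed, civilians_nearby, response_proportional, human_fired)
    (True,  False, True,  True),
    (True,  True,  False, False),
    (False, True,  True,  False),
    (True,  True,  True,  True),
    (False, False, True,  False),
    (True,  False, False, False),
]

# Step 2: a code of conduct expressed as rules over those concepts.
RULES = [
    ("engage only armed targets", lambda s: s[0]),
    ("spare civilians",           lambda s: not s[1]),
    ("respond proportionally",    lambda s: s[2]),
]

def features(situation):
    """Rule satisfactions become the features the weights are learned over."""
    return [1.0 if rule(situation) else 0.0 for _, rule in RULES]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Step 3: learn rule weights from observed human decisions.
# (Logistic regression here is a crude stand-in for MLN weight learning.)
weights = [0.0] * len(RULES)
bias = 0.0
lr = 0.5
for _ in range(2000):
    for s in SITUATIONS:
        x, y = features(s), 1.0 if s[3] else 0.0
        p = sigmoid(bias + sum(w * xi for w, xi in zip(weights, x)))
        err = p - y
        bias -= lr * err
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]

for (name, _), w in zip(RULES, weights):
    print(f"{name}: weight {w:+.2f}")
```

A higher weight on a rule means violating it did more to stop the observed humans from firing; a real system would learn such weights over relational rules and noisy concept detectors rather than a handful of clean boolean features.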

The main problem with this scenario, as you may have already guessed, is that letting robots learn ethics by observing humans may not be such a good idea. The robot is liable to get seriously confused when it sees that humans’ actions often violate their ethical principles. We can clean up the training data by including only the examples where, say, a panel of ethicists agrees that the soldier made the right decision, and the panelists can also inspect and tweak the model post-learning to their satisfaction. Agreement may be hard to reach, however, particularly if the panel includes all the different kinds of people it should. Teaching ethics to robots, with their logical minds and lack of baggage, will force us to examine our assumptions and sort out our contradictions. In this, as in many other areas, the greatest benefit of machine learning may ultimately be not what the machines learn but what we learn by teaching them.

Another objection to robot armies is that they make war too easy. But if we unilaterally relinquish them, that could cost us the next war. The logical response, advocated by the United Nations and Human Rights Watch, is a treaty banning robot warfare, similar to the Geneva Protocol of 1925 banning chemical and biological warfare. This misses a crucial distinction, however. Chemical and biological warfare can only increase human suffering, but robot warfare can greatly decrease it. If a war is fought by machines, with humans only in command positions, no one is killed or wounded. Perhaps, then, what we should do, instead of outlawing robot soldiers, is to outlaw human soldiers, once we’re ready.

Robot armies may indeed make wars more likely, but they will also change the ethics of war. Shoot/don’t shoot dilemmas become much easier if the targets are other robots. The modern view of war as an unspeakable horror, to be engaged in only as a last resort, will give way to a more nuanced view of war as an orgy of destruction that leaves all sides impoverished and is best avoided but not at all costs. And if war is reduced to a competition to see who can destroy the most, then why not compete instead to create the most?

In any case, banning robot warfare may not be viable. Far from banning drones, the precursors of tomorrow’s warbots, countries large and small are busy developing them, presumably because in their estimation the benefits outweigh the risks. As with any weapon, it’s safer to have robots than to trust the other side not to. If in future wars millions of kamikaze drones will destroy conventional armies in minutes, they’d better be our drones. If World War III will be over in seconds, as one side takes control of the other’s systems, we’d better have the smarter, faster, more resilient network. (Off-grid systems are not the answer: systems that aren’t networked can’t be hacked, but they can’t compete with networked systems, either.) And, on balance, a robot arms race may be a good thing, if it hastens the day when the Fifth Geneva Convention bans humans in combat. War will always be with us, but the casualties of war need not be.
