Book: The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World

Google + Master Algorithm = Skynet?

Of course, robot armies also raise a whole different specter. According to Hollywood, the future of humanity is to be snuffed out by a gargantuan AI and its vast army of machine minions. (Unless, of course, a plucky hero saves the day in the last five minutes of the movie.) Google already has the gargantuan hardware such an AI would need, and it’s recently acquired an army of robotics startups to go with it. If we drop the Master Algorithm into its servers, is it game over for humanity? Why yes, of course. It’s time to reveal my true agenda, with apologies to Tolkien:

Three Algorithms for the Scientists under the sky,
Seven for the Engineers in their halls of servers,
Nine for Mortal Businesses doomed to die,
One for the Dark AI on its dark throne,
In the Land of Learning where the Data lies.
One Algorithm to rule them all, One Algorithm to find them,
One Algorithm to bring them all and in the darkness bind them,
In the Land of Learning where the Data lies.

Hahahaha! Seriously, though, should we worry that machines will take over? The signs seem ominous. With every passing year, computers don’t just do more of the world’s work; they make more of the decisions. Who gets credit, who buys what, who gets what job and what raise, which stocks will go up and down, how much insurance costs, where police officers patrol and therefore who gets arrested, how long their prison terms will be, who dates whom and therefore who will be born: machine-learned models already play a part in all of these. The point where we could turn off all our computers without causing the collapse of modern civilization has long passed. Machine learning is the last straw: if computers can start programming themselves, all hope of controlling them is surely lost. Distinguished scientists like Stephen Hawking have called for urgent research on this issue before it’s too late.

Relax. The chances that an AI equipped with the Master Algorithm will take over the world are zero. The reason is simple: unlike humans, computers don’t have a will of their own. They’re products of engineering, not evolution. Even an infinitely powerful computer would still be only an extension of our will and nothing to fear. Recall the three components of every learning algorithm: representation, evaluation, and optimization. The learner’s representation circumscribes what it can learn. Let’s make it a very powerful one, like Markov logic, so the learner can in principle learn anything. The optimizer then does everything in its power to maximize the evaluation function, no more and no less, and the evaluation function is determined by us. A more powerful computer will just optimize it better. There’s no risk of it getting out of control, even if it’s a genetic algorithm. A learned system that didn’t do what we want would be severely unfit and soon die out. In fact, it’s the systems that have even a slight edge in serving us better that will, generation after generation, multiply and take over the gene pool. Of course, if we’re so foolish as to deliberately program a computer to put itself above us, then maybe we’ll get what we deserve.
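To make this concrete, here is a minimal sketch in Python (the names and the toy target are my invention, not from the book) of the dynamic the paragraph describes: a genetic algorithm whose fitness function is chosen by us, so every candidate it ever breeds can only climb the yardstick we defined.

```python
import random

# The evaluation function is determined by us: candidates are bit
# strings, and we (hypothetically) reward matching a target spec.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]

def fitness(candidate):
    """Human-chosen score; the optimizer can climb this and nothing else."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    return [1 - bit if random.random() < rate else bit for bit in candidate]

def genetic_algorithm(pop_size=20, generations=50):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: candidates that serve the evaluation function better
        # multiply, and the rest die out, exactly as the text argues.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

print(genetic_algorithm())  # converges toward TARGET, never past it
```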

The same reasoning applies to all AI systems because they all, explicitly or implicitly, have the same three components. They can vary what they do, even come up with surprising plans, but only in service of the goals we set them. A robot whose programmed goal is “make a good dinner” may decide to cook a steak, a bouillabaisse, or even a delicious new dish of its own creation, but it can’t decide to murder its owner any more than a car can decide to fly away. The purpose of AI systems is to solve NP-complete problems, which, as you may recall from Chapter 2, may take exponential time, but the solutions can always be checked efficiently. We should therefore welcome with open arms computers that are vastly more powerful than our brains, safe in the knowledge that our job is exponentially easier than theirs. They have to solve the problems; we just have to check that they did so to our satisfaction. AIs will think fast what we think slow, and the world will be the better for it. I, for one, welcome our new robot underlings.
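The solve-versus-check asymmetry this paragraph leans on is easy to demonstrate. A hedged sketch with a toy subset-sum instance (an NP-complete problem; the helper names are mine, not the book’s): finding a subset that hits the target takes brute force over up to 2^n subsets, while checking a proposed answer is a single pass.

```python
from itertools import combinations

def solve_subset_sum(numbers, target):
    """The machine's job: worst case exponential (tries every subset)."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

def check_subset_sum(numbers, subset, target):
    """Our job: linear time. We only verify the proposed solution."""
    return all(x in numbers for x in subset) and sum(subset) == target

numbers = [3, 34, 4, 12, 5, 2]
answer = solve_subset_sum(numbers, 9)        # hard: exhaustive search
print(check_subset_sum(numbers, answer, 9))  # easy: verification -> True
```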

It’s natural to worry about intelligent machines taking over because the only intelligent entities we know are humans and other animals, and they definitely have a will of their own. But there is no necessary connection between intelligence and autonomous will; or rather, intelligence and will may not inhabit the same body, provided there is a line of control between them. In The Extended Phenotype, Richard Dawkins shows how nature is replete with examples of an animal’s genes controlling more than its own body, from cuckoo eggs to beaver dams. Technology is the extended phenotype of man. This means we can continue to control it even if it becomes far more complex than we can understand.

Picture two strands of DNA going for a swim in their private pool, aka a bacterium’s cytoplasm, two billion years ago. They’re pondering a momentous decision. “I’m worried, Diana,” says one. “If we start making multicellular creatures, will they take over?” Fast-forward to the twenty-first century, and DNA is still alive and well. Better than ever, in fact, with an increasing fraction living safely in bipedal organisms comprising trillions of cells. It’s been quite a ride for our tiny double-stranded friends since they made their momentous decision. Humans are their trickiest creation yet; we’ve invented things like contraception that let us have fun without spreading our DNA, and we have, or seem to have, free will. But it’s still DNA that shapes our notions of fun, and we use our free will to pursue pleasure and avoid pain, which, for the most part, still coincides with what’s best for our DNA’s survival. We may yet be DNA’s demise if we choose to transmute ourselves into silicon, but even then, it’s been a great two billion years. The decision we face today is similar: if we start making AIs (vast, interconnected, superhuman, unfathomable AIs), will they take over? Not any more than multicellular organisms took over from genes, vast and unfathomable as we may be to them. AIs are our survival machines, in the same way that we are our genes’.

This does not mean that there is nothing to worry about, however. The first big worry, as with any technology, is that AI could fall into the wrong hands. If a criminal or prankster programs an AI to take over the world, we’d better have an AI police capable of catching it and erasing it before it gets too far. The best insurance policy against vast AIs gone amok is vaster AIs keeping the peace.

The second worry is that humans will voluntarily surrender control. It starts with robot rights, which seem absurd to me but not to everyone. After all, we already give rights to animals, who never asked for them. Robot rights might seem like the logical next step in expanding the “circle of empathy.” Feeling empathy for robots is not hard, particularly if they’re designed to elicit it. Even Tamagotchi, Japanese “virtual pets” with all of three buttons and an LCD screen, do it quite successfully. The first humanoid consumer robot will set off a race to make more and more empathy-eliciting robots, because they’ll sell much better than the plain metal variety. Children raised by robot nannies will have a lifelong soft spot for kindly electronic friends. The “uncanny valley” (our discomfort with robots that are almost human but not quite) will be unknown to them because they grew up with robot mannerisms and maybe even adopted them as cool teenagers.

The next step in the insidious progression of AI control is letting them make all the decisions because they’re, well, so much smarter. Beware. They may be smarter, but they’re in the service of whoever designed their score functions. This is the “Wizard of Oz” problem. Your job in a world of intelligent machines is to keep making sure they do what you want, both at the input (setting the goals) and at the output (checking that you got what you asked for). If you don’t, somebody else will. Machines can help us figure out collectively what we want, but if you don’t participate, you lose out, just like democracy, only more so. Contrary to what we like to believe today, humans quite easily fall into obeying others, and any sufficiently advanced AI is indistinguishable from God. People won’t necessarily mind taking their marching orders from some vast oracular computer; the question is who oversees the overseer. Is AI the road to a more perfect democracy or to a more insidious dictatorship? The eternal vigil has just begun.

The third and perhaps biggest worry is that, like the proverbial genie, the machines will give us what we ask for instead of what we want. This is not a hypothetical scenario; learning algorithms do it all the time. We train a neural network to recognize horses, but it learns instead to recognize brown patches, because all the horses in its training set happened to be brown. You just bought a watch, so Amazon recommends similar items: other watches, which are now the last thing you want to buy. If you examine all the decisions that computers make today (who gets credit, for example), you’ll find that they’re often needlessly bad. Yours would be too, if your brain were a support vector machine and all your knowledge of credit scoring came from perusing one lousy database. People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.
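The brown-horse failure mode is easy to reproduce. A minimal sketch with synthetic data (the features and examples are invented for illustration): a one-rule learner trained on a set where every horse happens to be brown latches onto color, then misclassifies the first white horse it meets.

```python
# Each example: (is_brown, has_four_legs, label). In this training set
# every horse happens to be brown, so color alone separates the classes.
train = [
    (True,  True,  "horse"),
    (True,  True,  "horse"),
    (False, False, "not horse"),  # e.g., a grey bird
    (False, True,  "not horse"),  # e.g., a black cat
]

def best_stump(data):
    """Pick the single feature that best predicts the label on *this* data."""
    best = None
    for feat in (0, 1):
        correct = sum((ex[feat] and ex[2] == "horse")
                      or (not ex[feat] and ex[2] != "horse")
                      for ex in data)
        if best is None or correct > best[1]:
            best = (feat, correct)
    return best[0]

feature = best_stump(train)  # picks feature 0, "is_brown": 4/4 on training
white_horse = (False, True)  # not brown, four legs
print("horse" if white_horse[feature] else "not horse")  # -> "not horse"
```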
