Book: The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
Chapter Four
Sebastian Seung’s Connectome (Houghton Mifflin Harcourt, 2012) is an accessible introduction to neuroscience, connectomics, and the daunting challenge of reverse engineering the brain. Parallel Distributed Processing,* edited by David Rumelhart, James McClelland, and the PDP research group (MIT Press, 1986), is the bible of connectionism in its 1980s heyday. Neurocomputing,* edited by James Anderson and Edward Rosenfeld (MIT Press, 1988), collates many of the classic connectionist papers, including: McCulloch and Pitts on the first models of neurons; Hebb on Hebb’s rule; Rosenblatt on perceptrons; Hopfield on Hopfield networks; Ackley, Hinton, and Sejnowski on Boltzmann machines; Sejnowski and Rosenberg on NETtalk; and Rumelhart, Hinton, and Williams on backpropagation. “Efficient backprop,”* by Yann LeCun, Léon Bottou, Genevieve Orr, and Klaus-Robert Müller, in Neural Networks: Tricks of the Trade, edited by Genevieve Orr and Klaus-Robert Müller (Springer, 1998), explains some of the main tricks needed to make backprop work.
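Several of the algorithms collected in Neurocomputing are simple enough to capture in a few lines. As a concrete anchor for readers following up on Rosenblatt's paper, here is a minimal sketch of the perceptron update rule; the AND-gate data, learning rate, and epoch count are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch only: a perceptron trained with Rosenblatt's update rule.
# The dataset (an AND gate), learning rate, and epoch count are assumptions
# chosen to keep the example small and convergent.
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Learn weights w and bias b so that step(w . x + b) matches y."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0  # threshold unit
            err = target - pred                        # 0 when the example is already correct
            w += lr * err * xi                         # Rosenblatt's rule: nudge weights toward the target
            b += lr * err
    return w, b

# The AND gate is linearly separable, so the perceptron converges.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print("weights:", w, "bias:", b)
```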
Neural Networks in Finance and Investing,* edited by Robert Trippi and Efraim Turban (McGraw-Hill, 1992), is a collection of articles on financial applications of neural networks. “Life in the fast lane: The evolution of an adaptive vehicle control system,” by Todd Jochem and Dean Pomerleau (AI Magazine, 1996), describes the ALVINN self-driving car project. Paul Werbos’s PhD thesis is Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences* (Harvard University, 1974). Arthur Bryson and Yu-Chi Ho describe their early version of backprop in Applied Optimal Control* (Blaisdell, 1969).
Learning Deep Architectures for AI,* by Yoshua Bengio (Now, 2009), is a brief introduction to deep learning. The problem of error signal diffusion in backprop is described in “Learning long-term dependencies with gradient descent is difficult,”* by Yoshua Bengio, Patrice Simard, and Paolo Frasconi (IEEE Transactions on Neural Networks, 1994). “How many computers to identify a cat? 16,000,” by John Markoff (New York Times, 2012), reports on the Google Brain project and its results. Convolutional neural networks, the current deep learning champion, are described in “Gradient-based learning applied to document recognition,”* by Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner (Proceedings of the IEEE, 1998). “The $1.3B quest to build a supercomputer replica of a human brain,” by Jonathon Keats (Wired, 2013), describes the European Union’s brain modeling project. “The NIH BRAIN Initiative,” by Thomas Insel, Story Landis, and Francis Collins (Science, 2013), describes the BRAIN initiative.
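The error signal diffusion that Bengio, Simard, and Frasconi analyze can be seen with a back-of-the-envelope calculation: each backward step of backprop multiplies the gradient by a weight times a sigmoid derivative (which never exceeds 0.25), so the signal shrinks geometrically with depth. The sketch below illustrates the effect; the layer count, weight value, and activation are assumptions chosen purely for the example.

```python
# Illustrative sketch only: how the backpropagated error signal shrinks with depth.
# Each backward step multiplies the gradient by weight * sigmoid'(a); with the
# assumed values below the per-layer factor is about 0.19, so thirty layers in,
# the signal is vanishingly small.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

grad = 1.0   # error signal at the output layer
w = 0.8      # assumed weight on each backward step
a = 0.5      # assumed pre-activation at every layer (held constant for clarity)
for layer in range(1, 31):
    grad *= w * sigmoid(a) * (1 - sigmoid(a))  # chain rule through one layer
    print(f"layer {layer:2d}: gradient magnitude = {grad:.3e}")
```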
Steven Pinker summarizes the symbolists’ criticisms of connectionist models in Chapter 2 of How the Mind Works (Norton, 1997). Seymour Papert gives his take on the debate in “One AI or Many?” (Daedalus, 1988). The Birth of the Mind, by Gary Marcus (Basic Books, 2004), explains how evolution could give rise to the human brain’s complex abilities.