The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World

Chapter Nine

Ensemble Methods: Foundations and Algorithms,* by Zhi-Hua Zhou (Chapman and Hall, 2012), is an introduction to metalearning. The original paper on stacking is “Stacked generalization,”* by David Wolpert (Neural Networks, 1992). Leo Breiman introduced bagging in “Bagging predictors”* (Machine Learning, 1996) and random forests in “Random forests”* (Machine Learning, 2001). Boosting is described in “Experiments with a new boosting algorithm,” by Yoav Freund and Rob Schapire (Proceedings of the Thirteenth International Conference on Machine Learning, 1996).
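Since the paragraph names several ensemble methods, a minimal sketch of Breiman's bagging recipe may be a useful companion to the citations: train each base model on a bootstrap resample of the data and combine their predictions by majority vote. The sketch below is an illustration, not code from any cited paper; it assumes NumPy arrays, integer class labels, and scikit-learn's DecisionTreeClassifier as the base learner, and all function names are made up for this example.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, n_models=25, seed=0):
    # Train n_models trees, each on a bootstrap sample (rows drawn with
    # replacement) of the training set. X and y are NumPy arrays; y holds
    # integer class labels.
    rng = np.random.default_rng(seed)
    n = len(X)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)  # bootstrap resample of row indices
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    # Majority vote: stack each model's predictions, then take the most
    # common label in every column (one column per test example).
    votes = np.stack([m.predict(X) for m in models])  # (n_models, n_samples)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(),
                               axis=0, arr=votes)

Breiman's random forests extend the same recipe by also choosing a random subset of features at each tree split, which further decorrelates the ensemble members.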

“I, Algorithm,” by Anil Ananthaswamy (New Scientist, 2011), chronicles the road to combining logic and probability in AI. Markov Logic: An Interface Layer for Artificial Intelligence,* which I cowrote with Daniel Lowd (Morgan & Claypool, 2009), is an introduction to Markov logic networks. The Alchemy website, http://alchemy.cs.washington.edu, also includes tutorials, videos, MLNs, data sets, publications, pointers to other systems, and so on. An MLN for robot mapping is described in “Hybrid Markov logic networks,”* by Jue Wang and Pedro Domingos (Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence, 2008). Thomas Dietterich and Xinlong Bao describe the use of MLNs in DARPA’s PAL project in “Integrating multiple learning components through Markov logic”* (Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence, 2008). “Extracting semantic networks from text via relational clustering,”* by Stanley Kok and Pedro Domingos (Proceedings of the Nineteenth European Conference on Machine Learning, 2008), describes how we used MLNs to learn a semantic network from the Web.
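For readers meeting Markov logic for the first time, the defining formula from the works cited above is worth stating. An MLN is a set of weighted first-order formulas, and it assigns each possible world x a probability that grows exponentially with the total weight of the formula groundings that x satisfies:

P(X = x) = \frac{1}{Z} \exp\!\Big( \sum_i w_i \, n_i(x) \Big),
\qquad Z = \sum_{x'} \exp\!\Big( \sum_i w_i \, n_i(x') \Big)

where w_i is the weight of formula F_i, n_i(x) is the number of true groundings of F_i in world x, and Z normalizes over all possible worlds.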

Efficient MLNs with hierarchical class and part structure are described in “Learning and inference in tractable probabilistic knowledge bases,”* by Mathias Niepert and Pedro Domingos (Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence, 2015). Google’s approach to parallel gradient descent is described in “Large-scale distributed deep networks,”* by Jeff Dean et al. (Advances in Neural Information Processing Systems 25, 2012). “A general framework for mining massive data streams,”* by Pedro Domingos and Geoff Hulten (Journal of Computational and Graphical Statistics, 2003), summarizes our sampling-based method for learning from open-ended data streams. The FuturICT project is the subject of “The machine that would predict the future,” by David Weinberger (Scientific American, 2011).
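The sampling-based stream method mentioned here rests on a Hoeffding-style bound: after n independent observations of a statistic whose values span an interval of width R, the observed mean is within epsilon of the true mean with probability at least 1 - delta. The sketch below shows such a test used to decide when a stream has delivered enough examples to commit to a decision; the function names and the two-candidate comparison are illustrative, modeled on the decision-tree case rather than taken from the paper's actual interface.

import math

def hoeffding_bound(R, n, delta):
    # Epsilon such that, with probability at least 1 - delta, the observed
    # mean of n i.i.d. draws of a range-R statistic is within epsilon of
    # the true mean: epsilon = sqrt(R^2 * ln(1/delta) / (2n)).
    return math.sqrt(R * R * math.log(1.0 / delta) / (2.0 * n))

def enough_data(best, second_best, R, n, delta=1e-7):
    # Commit to the leading candidate (e.g., the best split attribute in a
    # streaming decision tree) once its observed advantage over the
    # runner-up exceeds the bound, so the choice is correct with high
    # probability.
    return (best - second_best) > hoeffding_bound(R, n, delta)

In the authors' earlier VFDT decision-tree learner, a test of this shape decides when enough of the stream has been seen to pick a split with high confidence.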

“Cancer: The march on malignancy” (Nature supplement, 2014) surveys the current state of the war on cancer. “Using patient data for personalized cancer treatments,” by Chris Edwards (Communications of the ACM, 2014), describes the early stages of what could grow into CanceRx. “Simulating a living cell,” by Markus Covert (Scientific American, 2014), explains how his group built a computer model of a whole infectious bacterium. “Breakthrough Technologies 2015: Internet of DNA,” by Antonio Regalado (MIT Technology Review, 2015), reports on the work of the Global Alliance for Genomics and Health. Cancer Commons is described in “Cancer: A Computational Disease that AI Can Cure,” by Jay Tenenbaum and Jeff Shrager (AI Magazine, 2011).
