We are thrilled to welcome Joshua Tenenbaum, Marc Mézard and Melanie Mitchell as keynote speakers.
Reverse-engineering core common sense with the tools of probabilistic programs, game-style simulation engines, and inductive program synthesis
Abstract
None of today’s AI systems or approaches comes anywhere close to capturing the common sense of a toddler, or even a 3-month-old infant. I will talk about some of the challenges facing conventional machine learning paradigms, such as end-to-end unsupervised learning in deep networks and deep reinforcement learning, and discuss some initial, small steps we have taken with an alternative cognitively inspired AI approach. This requires us to develop a different engineering toolset, based on probabilistic programs, game-style simulation programs as general-purpose startup software (or “the game engine in the head”), and learning as programming (or “the child as hacker”).
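To give a flavor of this toolset, here is a minimal probabilistic-program sketch in Python. It is a toy illustration of the general idea, not code from the speaker’s systems; the scenario and names such as simulate_slide are invented for this example. A tiny “game engine” forward model simulates how far a block slides, and Bayesian inference over that simulator recovers a latent physical property, friction, from one noisy observation.

    import math
    import random

    G = 9.8  # gravitational acceleration (m/s^2)

    def simulate_slide(mu, v0=5.0):
        # Forward model, the "game engine": a block launched at speed v0
        # slides a distance v0^2 / (2 * mu * g) before friction stops it.
        return v0 ** 2 / (2 * mu * G)

    def log_likelihood(observed, predicted, noise=0.2):
        # Gaussian observation noise on the measured sliding distance.
        return -0.5 * ((observed - predicted) / noise) ** 2

    def infer_friction(observed_distance, n_samples=100_000):
        # Importance sampling: draw friction values from the prior and
        # weight each by how well its simulated slide matches the data.
        total_weight = 0.0
        weighted_mu = 0.0
        for _ in range(n_samples):
            mu = random.uniform(0.05, 1.0)  # uniform prior over friction
            w = math.exp(log_likelihood(observed_distance, simulate_slide(mu)))
            total_weight += w
            weighted_mu += w * mu
        return weighted_mu / total_weight

    print(infer_friction(2.5))  # posterior mean of mu, roughly 0.51

The scaled-up version the abstract points to replaces this one-line simulator with a full game-style physics engine, and replaces the fixed model with programs that are themselves learned, i.e. learning as programming.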
Speaker Bio
Joshua Tenenbaum is the Paul E. Newton Career Development Professor of Cognitive Science and Computation in the Department of Brain and Cognitive Sciences, and a member of the Computer Science and Artificial Intelligence Laboratory at MIT. He received his Ph.D. from MIT in 1999 and after a brief postdoc with the MIT AI Lab, he joined the Stanford University faculty as Assistant Professor of Psychology and (by courtesy) Computer Science. He returned to MIT as a faculty member in 2002. He currently serves as Associate Editor of the journal Cognitive Science, and he has been active on the program committees of the Neural Information Processing Systems (NIPS) and Cognitive Science (CogSci) conferences. In 2019, he was named a MacArthur Fellow.
Statistical Physics and Statistical Inference
Abstract
A major challenge of contemporary statistical inference is the large-scale limit, where one wants to discover the values of many hidden parameters using large amounts of data. In recent years, ideas from the statistical physics of disordered systems have helped to develop new algorithms for important inference problems, ranging from community detection to compressed sensing, machine learning (notably neural networks), tomography, and generalized linear regression. The talk will review these developments and explain how they can be used to develop new types of algorithms and to identify phase transitions.
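The dictionary behind this correspondence can be stated in one line (a standard identity in this literature, included here for orientation): the posterior over hidden parameters x given data y is a Boltzmann distribution,

    P(x \mid y) \;=\; \frac{P(y \mid x)\, P(x)}{P(y)} \;=\; \frac{e^{-H(x;\, y)}}{Z(y)},
    \qquad
    H(x; y) = -\log P(y \mid x) - \log P(x),
    \qquad
    Z(y) = \sum_x e^{-H(x;\, y)} = P(y).

The hidden parameters play the role of spin configurations, the data act as quenched disorder, and the evidence Z(y) is a partition function; non-analyticities of the free energy -log Z(y), as a noise level or sampling rate is varied, are the phase transitions mentioned above, separating regimes where inference is easy, hard, or impossible.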
Speaker Bio
Marc Mézard is a theoretical physicist. He received his PhD from the École normale supérieure in Paris, did a postdoc in Rome, and became head of the statistical physics group at Université Paris-Sud. Since 2012 he has been the director of the École normale supérieure. His main field of research is statistical physics and its use in various branches of science: biology, economics and finance, information theory, computer science, statistics, and signal processing. In recent years his research has focused on information processing in neural networks. He has received the Lars Onsager Prize of the American Physical Society, the Humboldt-Gay-Lussac Prize, the CNRS Silver Medal, and the Ampère Prize of the French Academy of Sciences. He is a member of the European Academy of Sciences.
Why AI is Harder Than We Think
Abstract
Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI Spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI Winter”). Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this talk I will discuss some fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I will also speculate on what is needed for the grand challenge of making AI systems more robust, general, and adaptable—in short, more intelligent.
Speaker Bio
Melanie Mitchell is the Davis Professor of Complexity at the Santa Fe Institute, and Professor of Computer Science (currently on leave) at Portland State University. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. Her latest book is Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus and Giroux).