Andrew Blake is a Microsoft Distinguished Scientist and the Laboratory Director of Microsoft Research Cambridge, England. He joined Microsoft in 1999 as a Senior Researcher to found the Computer Vision group. In 2008 he became a Deputy Managing Director at the lab, before assuming his current position in 2010. Prior to joining Microsoft, Andrew trained in mathematics and electrical engineering in Cambridge, England, and studied for a doctorate in Artificial Intelligence in Edinburgh. He was an academic for 18 years, latterly on the faculty at Oxford University, where he pioneered the theory and algorithms that make it possible for computers to behave as seeing machines.
He has published several books, including "Visual Reconstruction" with A. Zisserman (MIT Press), "Active Vision" with A. Yuille (MIT Press) and "Active Contours" with M. Isard (Springer-Verlag). He has twice won the prize of the European Conference on Computer Vision, with R. Cipolla in 1992 and with M. Isard in 1996, and was awarded the IEEE David Marr Prize (jointly with K. Toyama) in 2001.
In 2006 the Royal Academy of Engineering awarded him its Silver Medal, and in 2007 the Institution of Engineering and Technology presented him with the Mountbatten Medal (previously awarded to computer pioneers Maurice Wilkes and Tim Berners-Lee, amongst others). He was elected Fellow of the Royal Academy of Engineering in 1998, Fellow of the IEEE in 2008, and Fellow of the Royal Society in 2005. In 2010, Andrew was elected to the council of the Royal Society. In 2011, he and colleagues at Microsoft Research received the Royal Academy of Engineering MacRobert Award for their machine learning contribution to Microsoft Kinect human motion capture. In 2012 Andrew was elected to the board of the EPSRC and also received an honorary degree of Doctor of Science from the University of Edinburgh. In 2013 Andrew was awarded an honorary degree of Doctor of Engineering from the University of Sheffield. In 2014, Andrew gave the prestigious Gibbs Lecture at the Joint Mathematics Meetings.
Analysis by Synthesis versus Learned Detection for Vision
Machine vision works nowadays. Machines can navigate using vision, separate objects from background, recognise a wide variety of objects, and often track their motion. These abilities are valuable spin-offs in their own right, but they are also part of an extended adventure in understanding the nature of intelligence through vision.
One question is whether intelligent systems will turn out to depend more on generative models, or on networks trained on data at ever greater scale. In vision systems this boils down to the roles of two paradigms: analysis-by-synthesis versus empirical recognisers. Each approach has its strengths, and empirical recognisers especially have made great strides in performance in the last few years, through deep learning. One can speculate about how deeply the two approaches may eventually be integrated, and on the progress that has already been made with such integration.
Niloy J. Mitra leads the Smart Geometry Processing group in the Department of Computer Science at University College London (UCL). Niloy received his PhD degree from Stanford University under the guidance of Prof. Leonidas Guibas. His research interests include shape understanding, computational design, geometric processing, and more generally in computer graphics. Niloy received the ACM Siggraph Significant New Researcher Award in 2013 and the BCS Roger Needham Award in 2015.
Computational Design of Functional Objects
Both designers and novice users like to design functional objects for physical use. However, there is limited computational support to facilitate this process. Existing tools either require specialized skills and extensive training, or force users to perform extensive trial-and-error exploration with limited guidance. In this talk we will discuss computational tools that support functional prototyping, guided design, and material-aware modeling.
Max Welling is a Professor of Computer Science at the University of Amsterdam and the University of California, Irvine. In the past he held postdoctoral positions at Caltech ('98-'00), UCL ('00-'01) and the University of Toronto ('01-'03). He received his PhD in '98 under the supervision of Nobel laureate Prof. G. 't Hooft.
Max Welling served as associate editor in chief of IEEE TPAMI from 2011 to 2015. He serves on the editorial boards of JMLR and JML, and was an associate editor for Neurocomputing, JCGS and TPAMI. In 2009 he was conference chair for AISTATS, in 2013 he was program chair for NIPS, in 2014 he was general chair for NIPS, and in 2016 he will be a program chair at ECCV. He has received multiple grants from NSF, NIH, ONR, NWO, Facebook, Yahoo and Google, including an NSF CAREER grant in 2005. He is the recipient of the ECCV Koenderink Prize in 2010 and the best paper award at ICML 2012.
Welling is currently the director of the master's program in artificial intelligence at the UvA, and he serves on the scientific board of the newly opened Data Science Research Center in Amsterdam. He is also an associate fellow of the Neural Computation and Adaptive Perception Program at the Canadian Institute for Advanced Research. Welling's research focuses on large-scale statistical learning. He has made contributions in Bayesian learning, approximate inference in graphical models and visual object recognition. He has over 150 academic publications.
Learning to generate
The recent amazing success of deep learning has been mainly in discriminative learning, that is, classification and regression. An important factor in this success has been, besides Moore's law, the availability of large labeled datasets. However, it is not clear whether in the future the number of available labels will grow as fast as the amount of unlabeled data, which provides one argument for interest in unsupervised and semi-supervised learning.
Besides this, there are a number of other reasons why unsupervised learning remains important: data in the life sciences often has many more features than instances (p >> n), probabilities over feature space are useful for planning and control problems, and complex simulator models are the norm in the sciences. In this talk I will discuss deep generative models that can be jointly trained with discriminative models and that facilitate semi-supervised learning. I will discuss recent progress in learning and Bayesian inference in these "variational auto-encoders". I will then extend the deep generative models to the class of simulators for which no tractable likelihood exists, and discuss new Bayesian inference procedures to fit these models to data.
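A key device behind the variational auto-encoders mentioned above is the reparameterization trick: rather than sampling the latent variable z directly from N(mu, sigma^2), one samples noise eps from N(0, I) and computes z = mu + sigma * eps, so that gradients can flow through mu and sigma during training. The following is a minimal NumPy sketch of this idea, together with the closed-form KL divergence term of the VAE objective; it is an illustrative toy, not code from the talk, and all function names are our own.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    Sampling eps separately keeps z differentiable with
    respect to mu and log_var (the reparameterization trick).
    """
    sigma = np.exp(0.5 * log_var)
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

def kl_divergence(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims.

    This is the regularization term of the VAE objective,
    available in closed form for Gaussian posteriors.
    """
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

rng = np.random.default_rng(0)
mu = np.zeros(4)        # posterior mean for a 4-dim latent
log_var = np.zeros(4)   # log variance 0, i.e. sigma = 1
z = reparameterize(mu, log_var, rng)
print(z.shape)                      # (4,)
print(kl_divergence(mu, log_var))   # 0.0: posterior equals the prior
```

In a full VAE, mu and log_var would be produced by an encoder network from the input, and z would be fed to a decoder; the KL term above is then added to the reconstruction loss to form the evidence lower bound.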
The slides of Max Welling's keynote talk (152 MB, pptx format) are available for download.