Understanding Deep Learning via Scalable Nonparametric Methods
Speaker: Dr. Le Song, College of Computing, Georgia Institute of Technology

Title: "Understanding Deep Learning via Scalable Nonparametric Methods"

Date: Tuesday, 5 January 2016
Time: 11:00am - 12:00 noon
Venue: Lecture Theater H (near lifts 27 & 28), HKUST

Abstract:

The complexity and scale of big data impose tremendous challenges on their analysis. Yet big data also offer great opportunities. Some nonlinear phenomena or relations, which are unclear or cannot be inferred reliably from small and medium-sized data, become clear and can be learned robustly from big data. Typically, the form of the nonlinearity is unknown and must be learned from data as well. Being able to harness the nonlinear structures in big data could allow us to tackle problems that were previously intractable, or to obtain results far better than the previous state of the art.

Nowadays, deep neural networks are the method of choice for large-scale nonlinear learning problems. What makes deep neural networks work? Is there any general principle for tackling high-dimensional nonlinear problems that we can learn from deep neural networks? Can we design competitive or better alternatives based on such knowledge? To make progress on these questions, we designed new nonparametric methods that are scalable in terms of memory, computation, and dimensionality. These methods allow us to run large-scale "lesion-and-replace" experiments on existing deep learning architectures and to investigate four crucial aspects: the usefulness of the fully connected layers, the advantage of the feature learning process, the limitation of the gradient descent updates, and the importance of the compositional structures. Our results also point to some promising directions for future research.

********************

Biography:

Le Song is an assistant professor in the College of Computing, Georgia Institute of Technology. He received his Ph.D. in Machine Learning from the University of Sydney and NICTA in 2008, and then conducted postdoctoral research in the Department of Machine Learning at Carnegie Mellon University between 2008 and 2011. Before joining the Georgia Institute of Technology, he was a research scientist at Google. His principal research direction is machine learning, especially nonparametric and nonlinear models for large-scale and complex problems arising in artificial intelligence, social network analysis, healthcare analytics, computational biology, and other interdisciplinary domains. He is the recipient of the NSF CAREER Award (2014), the IPDPS'15 Best Paper Award, the NIPS'13 Outstanding Paper Award, and the ICML'10 Best Paper Award. He has also served as an area chair for leading machine learning conferences such as ICML, NIPS, and AISTATS.