Chasing Sparsity: From Model, to Algorithm, to Science

Speaker: Tianlong CHEN
         University of Texas at Austin

Title:  "Chasing Sparsity: From Model, to Algorithm, to Science"

Date:   Monday, 13 March 2023

Time:   10:00am - 11:00am HKT

Zoom link:
https://hkust.zoom.us/j/465698645?pwd=aVRaNWs2RHNFcXpnWGlkR05wTTk3UT09

Meeting ID: 465 698 645
Passcode: 20222023


Abstract:

Sparsity is commonly produced by model compression (i.e., pruning),
which eliminates unnecessary parameters in deep neural networks (NNs)
for efficiency. However, its significance extends beyond this, as
sparsity has been exploited to model the underlying low dimensionality of
neural networks, and to understand their implicit regularization,
generalization, and robustness. My work demonstrates that learning with
sparsity-aware priors substantially improves performance, through a full
stack of applied work on algorithms, systems, and scientific challenges.
In this talk, I will start from (1) efficient sparse models by presenting
a special kind of sparse NN which is capable of universally transferring
across diverse downstream tasks and matches the full accuracy of its dense
counterpart; then I will share more insights about (2) sparsity beyond
efficiency, including boosted generalization and robustness from
sparsity-regularized training; finally, I will describe the prospects
of (3) sparsity for science, such as COVID-19 vaccine design and
quantum computing.


****************
Biography:

Tianlong Chen is a final-year Ph.D. candidate in the Electrical and
Computer Engineering department at the University of Texas at Austin. His
research focuses on building accurate and responsible machine learning
(ML) systems. His most recent passion is learning with sparsity, which
tightly connects to various important topics, including ML model
efficiency, reliability, learning to optimize, and interdisciplinary
scientific challenges such as bioengineering and quantum computing. He has
co-authored over 90 papers at top-tier venues (NeurIPS, ICML, ICLR, JMLR,
CVPR, ICCV, ECCV, TPAMI, etc.), including a Best Paper Award at LoG'22.
Tianlong is a recipient of the IBM Ph.D. Fellowship Award, the Adobe Ph.D.
Fellowship Award, and the Graduate Dean's Prestigious Fellowship.