Optimal Algorithms for Interactive Settings: Multi-Distribution Learning and Reinforcement Learning
Speaker: Dr. Zihan Zhang
University of Washington
Title: Optimal Algorithms for Interactive Settings: Multi-Distribution Learning and Reinforcement Learning
Date: Monday, 24 February 2025
Time: 10:00am - 11:00am
Join Zoom Meeting:
https://hkust.zoom.us/j/96688516988?pwd=qfj1PQIjEi0I75lwVGfY7PurdPDRBW.1
Meeting ID: 966 8851 6988
Passcode: 202526
Abstract:
In classical supervised learning, optimal sample complexity is well understood with established minimax rates. However, in interactive learning, statistical dependencies in sequential data collection make achieving these guarantees significantly more challenging.
In this talk, I will discuss my work on two important interactive learning settings: (1) Multi-Distribution Learning, where the goal is to find solutions that generalize across diverse tasks—a unifying framework addressing robustness, fairness, and multi-group collaboration; and (2) Reinforcement Learning, where an agent interacts with an environment to maximize cumulative rewards, with applications ranging from games and robotics to large language models. For both settings, I will present techniques that decouple statistical dependencies and yield algorithms with optimal sample complexity.
Biography:
Zihan Zhang is currently a postdoctoral researcher at the Paul G. Allen School of Computer Science & Engineering, University of Washington. Previously, he was a postdoctoral researcher in the Department of Electrical and Computer Engineering at Princeton University. He received his Ph.D. and bachelor's degrees from the Department of Automation, Tsinghua University, in 2022 and 2017, respectively. His research focuses on machine learning theory, including reinforcement learning, online learning, and game theory.