Sample Complexity in Reinforcement Learning

PhD Qualifying Examination


Title: "Sample Complexity in Reinforcement Learning"

by

Mr. Zijun CHEN


Abstract:

Reinforcement Learning (RL) is a fundamental branch of machine learning in 
which an agent learns to make decisions by interacting with an environment 
to maximize its cumulative reward. In practice, collecting samples through 
interaction with real-world environments is often expensive and 
time-consuming. Therefore, understanding the sample complexity, i.e., the 
number of interactions required to learn a near-optimal policy or value 
function, has become a central issue in both the theoretical and practical aspects of RL 
research. In this survey, we review the historical development and recent 
advances in the study of sample complexity in RL. We begin by examining the 
classical RL setting with discounted rewards, followed by a discussion of 
the average reward criterion. Next, we introduce recent progress in 
distributionally robust reinforcement learning (DR-RL). For each topic, we 
compare model-free and model-based methods, and analyze the underlying 
mathematical principles that govern the behavior of these algorithms. 
Finally, we highlight promising future research directions in this field.


Date:                   Friday, 5 September 2025

Time:                   3:00pm - 5:00pm

Venue:                  Room 5501
                        Lifts 25/26

Committee Members:      Prof. Ke Yi (Supervisor, Chairperson)
                        Dr. Sunil Arya
                        Dr. Nian Si (IEDA)