AI that Knows When It Doesn't Know
Speaker:
Ms. Haoyue BAI
University of Wisconsin–Madison
Title: AI that Knows When It Doesn't Know
Date: Thursday, 19 March 2026
Time: 9:30 am – 10:30 am
Join Zoom Meeting:
https://hkust.zoom.us/j/96688516988?pwd=OnUqmxqUN3hMAnxHb3OamdsrQzc17d.1
Meeting ID: 966 8851 6988
Passcode: 202627
Abstract:
Modern AI systems achieve impressive accuracy, yet they often fail silently when deployed outside their training distributions, producing high-confidence errors, hallucinations, or brittle long-horizon behavior. My research focuses on building AI systems that know when they don’t know, by treating generalization under distribution shift as a deployment-aware reliability problem. I develop principled frameworks that characterize when and why models fail, viewing real-world deployment data as a mixture of in-distribution, covariate-shifted, and semantic-shifted samples. This perspective enables joint reasoning about generalization and unknown-aware learning, leading to learning principles, algorithms, and diagnostic benchmarks that safely leverage unlabeled wild data and expose silent failures. More broadly, my goal is to move beyond accuracy-centric evaluation toward AI systems that remain reliable not only when they succeed, but also when they fail, enabling trustworthy deployment in open-world settings.
Biography:
Haoyue Bai is a Ph.D. candidate in the Computer Sciences Department at the University of Wisconsin–Madison, advised by Prof. Robert Nowak. Her research focuses on developing the theoretical and algorithmic foundations of reliable and trustworthy AI, with an emphasis on generalization under distribution shift, open-world robustness, and safe reasoning in foundation models. Her methods leverage post-deployment wild data and principled uncertainty to enable AI systems to generalize reliably, detect unknowns, and behave safely in open-world settings. She is a recipient of the OpenAI Superalignment Fellowship.