Causal AI for Transferable, Interpretable, and Controllable Machine Learning

Speaker: Mr. KONG Lingjing
Carnegie Mellon University

Title: Causal AI for Transferable, Interpretable, and Controllable Machine Learning

Date: Monday, 9 March 2026

Time: 9:30am - 10:30am

Join Zoom Meeting: https://hkust.zoom.us/j/96688516988?pwd=OnUqmxqUN3hMAnxHb3OamdsrQzc17d.1
Meeting ID: 966 8851 6988
Passcode: 202627

Abstract:

Foundation models are rapidly becoming capable assistants for knowledge work. Yet their deployment in real settings is limited by three gaps: they do not transfer reliably across environments, their internal reasoning is opaque, and their behavior is hard to control precisely. In this talk, I argue that these limitations are not only a matter of model size: they are fundamentally about whether learning captures and leverages the underlying structure of the data-generating process. I use causal thinking as a practical lens to model what is invariant, what changes, and what can be intervened on, and I show how this perspective leads to learning principles that improve trustworthiness.


Biography:

Lingjing Kong is a Ph.D. candidate in the Computer Science Department at Carnegie Mellon University. His research focuses on Causal AI for transferable, interpretable, and controllable systems, with an emphasis on understanding and exploiting the structure of real-world data to make foundation models more actionable and reliable. He develops identification principles and scalable algorithms for learning unified models from heterogeneous data, uncovering hierarchical concept structures in unstructured data (e.g., images and text), and generalizing beyond training support through compositionality and extrapolation. His work has appeared in top ML venues, including ICML, NeurIPS, CVPR, ICLR, and EMNLP, and has been prototyped and applied in industry through research internships.