The Hong Kong University of Science and Technology
Department of Computer Science and Engineering
PhD Thesis Defence
Title: "Grounding LLM Agents in Knowledge, Context, and Action"
By
Mr. Jiayang CHENG
Abstract:
Large language models (LLMs) are increasingly deployed as autonomous agents,
yet they remain insufficiently grounded in the information they rely on. This
thesis studies LLM agent grounding along three dimensions: knowledge, context,
and action. For knowledge, we develop methods for detecting and resolving
conflicts among retrieved evidence, finding that even strong LLMs frequently
favor one piece of conflicting evidence without justification. We further
introduce an evaluation framework for verifying claims that require multiple
interdependent pieces of evidence, revealing that models struggle with
partially supported claims and tend to compensate for missing information
using their internal knowledge. For context, we build an interactive benchmark
that evaluates how well agents maintain memory over extended conversations
through on-policy interaction, uncovering significant memory limitations in
both standalone LLMs and LLM-powered agents. For action, we train agents to
orchestrate multi-step API calls through reinforcement learning with a
graduated reward design that decomposes correctness into atomic validity and
orchestration consistency, improving performance on complex tool-use tasks.
Collectively, these contributions provide evaluation tools and training
methods that help LLM agents operate more reliably in real-world settings.
Date: Thursday, 21 May 2026
Time: 10:00am - 12:00nn
Venue: Room 2128A
Lift 19
Chairman: Prof. Vincent Kin Nang LAU (ECE)
Committee Members: Dr. Yangqiu SONG (Supervisor)
Prof. Raymond WONG
Dr. Dan XU
Prof. Bert SHI (ECE)
Dr. Xiao-Ming WU (PolyU)