PhD Qualifying Examination
Title: "Semantic Grounding in Large Language Models: Methods, Challenges, and Applications"
by
Mr. Jiayang CHENG
Abstract:
Semantic grounding, the process by which language models connect their
outputs to verifiable knowledge sources and factual information, has emerged
as a critical challenge with the advent of Large Language Models (LLMs).
While these models demonstrate remarkable generation capabilities across
diverse domains, they suffer from factual inaccuracies, hallucination, and a
lack of transparency in their reasoning processes. We
present an in-depth analysis of core grounding tasks including knowledge
base question answering, retrieval-augmented generation, and fact
verification, examining the evolution from traditional symbolic systems to
modern agentic approaches that incorporate sophisticated reasoning
capabilities. We organize current methods along three key dimensions:
parametric knowledge grounding, context-based grounding, and external
knowledge grounding, analyzing how each supports hallucination mitigation
and source attribution. Furthermore, we identify
critical limitations in existing approaches and outline promising directions
for future research to advance reliable and transparent language generation.
Date: Thursday, 19 June 2025
Time: 9:00am - 11:00am
Venue: Room 3494
Lifts 25/26
Committee Members: Dr. Yangqiu Song (Supervisor)
Prof. Raymond Wong (Chairperson)
Dr. Dan Xu