PhD Qualifying Examination


Title: "Legal Reasoning in the Era of Large Language Models: A Survey"

by

Mr. Wei FAN


Abstract:

Legal reasoning—defined as the systematic process of deriving legal outcomes 
from factual circumstances through reference to precedent and statutory law, 
and typically structured through methodologies such as rule-based and 
analogy-based reasoning within the Issue, Rule, Application, and Conclusion 
(IRAC) framework—is undergoing notable advancements due to the emergence of 
Large Language Models (LLMs). These models, trained on voluminous legal and 
general-domain corpora, demonstrate impressive capabilities in performing 
complex legal reasoning tasks that have traditionally been challenging for 
conventional Natural Language Processing (NLP) methods. Despite a growing 
body of scholarly work investigating these models, a unified and critical 
overview of LLMs' capabilities in legal reasoning remains lacking. This 
survey aims to provide such a comprehensive examination. We present an 
in-depth overview of their applications in tasks such as judgment prediction 
and legal question answering, and discuss the methods and datasets that 
underpin their development. Furthermore, we scrutinize inherent 
limitations, including hallucinations, data privacy concerns, bias, and 
limited explainability, and conclude by outlining key directions for future 
research to advance the field of legal reasoning.


Date:                   Friday, 30 May 2025

Time:                   2:00pm - 4:00pm

Venue:                  Room 2128B
                        Lift 19

Committee Members:      Dr. Yangqiu Song (Supervisor)
                        Dr. Shuai Wang (Chairperson)
                        Prof. Ke Yi