PhD Qualifying Examination
Title: "Efficient Reasoning for Large Reasoning Models"
by
Mr. Haoyue ZHANG
Abstract:
The rapid advancement of Large Language Models (LLMs) has significantly
enhanced their capabilities in natural language understanding and complex
reasoning tasks. However, the extensive use of Chain-of-Thought (CoT)
prompting often leads to overly verbose reasoning processes, resulting in
high computational costs and latency that hinder deployment in real-time,
computation-sensitive applications. This survey provides a comprehensive
overview of efficient reasoning methodologies for Large Reasoning Models
(LRMs). We introduce training-based approaches, including Supervised
Fine-Tuning (SFT) and Reinforcement Learning (RL), as well as innovative
reasoning paradigms such as prompt-guided efficient reasoning, switching
between thinking modes, collaborative reasoning, and latent reasoning.
We also examine evaluation metrics and benchmarks tailored to assessing
reasoning efficiency. Finally, we discuss future research directions for
further optimizing reasoning processes in LRMs, including supporting
multi-modal applications, improving scalability, reducing resource
consumption, and addressing associated safety concerns.
Date: Thursday, 14 August 2025
Time: 9:00am - 11:00am
Venue: Room 3494
Lifts 25/26
Committee Members: Prof. Song Guo (Supervisor)
Prof. Raymond Wong (Chairperson)
Prof. Ke Yi