PhD Qualifying Examination


Title: "Resource-efficient Learning for Large Foundation Models"

by

Mr. Sikai BAI


Abstract:

The success of Large Foundation Models (LFMs) is shadowed by their immense 
computational and memory costs, creating a critical bottleneck for practical 
deployment. This survey explores resource-efficient learning, a paradigm 
that shifts the focus from raw model scale to maximizing a model's inference 
performance relative to its deployment costs in latency, memory, and energy. 
We systematically review three primary strategies to achieve this: 1) 
Parameter-Efficient Fine-tuning: Adapting models to new tasks with only a 
small fraction of trainable parameters, avoiding costly full fine-tuning. 
2) Model Compression: Aggressively reducing a model's deployment footprint 
and accelerating computation through techniques such as quantization, 
pruning, and knowledge distillation. 3) Data-Efficient Learning: Reducing 
reliance on expensive supervised data to build and align high-performance 
models. 
Finally, we outline future research that extends the philosophy of resource 
efficiency to novel strategies tailored for large-scale reasoning models, 
focusing on optimizing long-context inference and distilling complex 
reasoning capabilities into smaller, faster architectures, so that these 
powerful reasoning tools can be developed and deployed efficiently and 
responsibly across a wider range of applications.


Date:                   Thursday, 28 August 2025

Time:                   3:00pm - 5:00pm

Venue:                  Room 3494
                        Lifts 25/26

Committee Members:      Prof. Song Guo (Supervisor)
                        Prof. Ke Yi (Chairperson)
                        Prof. Long Quan