PhD Qualifying Examination
Title: "Communication Quantization Algorithms in Distributed Deep Learning"
by
Mr. Xiaozhe REN
Abstract:
With the increasing scale of deep learning models, communication overhead
has become a major performance bottleneck in distributed deep learning.
Communication quantization is a crucial technique to reduce communication
costs. This survey presents a systematic review of communication quantization
algorithms in distributed deep learning, categorizing them based on their
mapping strategies into three classes: direct quantization, uniform
quantization, and non-uniform quantization. Direct quantization methods
compress gradients without explicit mapping functions, offering high
compression rates with little computational overhead. Uniform quantization
methods transform gradient values using evenly spaced quantization levels,
balancing compression ratio against accuracy. Non-uniform quantization
approaches employ adaptive quantization levels that preserve the distribution
of gradient values more faithfully than evenly spaced levels can. For each
category, we
systematically analyze the motivation, design principles, and technical
innovations of representative algorithms, providing a comprehensive
understanding of their development and contributions. Additionally, we
examine these methods from both theoretical and practical perspectives,
considering their convergence guarantees and applicability to large language
model training. Furthermore, we discuss the challenges in current approaches
as well as possible future research directions.
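
To make the taxonomy above concrete, the minimal NumPy sketch below contrasts
the first two classes: a direct quantizer that transmits only signs (in the
spirit of signSGD) and a QSGD-style uniform quantizer that rounds
stochastically onto evenly spaced levels. The function names, the l2-norm
scaling, and the four-level setting are illustrative assumptions for this
announcement, not code from any of the surveyed papers.

```python
import numpy as np

def sign_quantize(grad):
    """Direct 1-bit quantization in the spirit of signSGD (illustrative):
    only the sign of each entry is transmitted, with no mapping function."""
    return np.sign(grad)

def uniform_quantize(grad, num_levels=4, rng=None):
    """QSGD-style stochastic uniform quantization (illustrative sketch).

    Maps each entry's magnitude onto one of `num_levels` evenly spaced
    levels in [0, ||grad||], rounding stochastically so the quantizer is
    unbiased: E[dequantized output] == grad.
    """
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(grad)
    if norm == 0:
        return np.zeros_like(grad)
    # Scale magnitudes into [0, num_levels], then split into floor + fraction.
    scaled = np.abs(grad) / norm * num_levels
    lower = np.floor(scaled)
    # Round up with probability equal to the fractional part (unbiasedness).
    level = lower + (rng.random(grad.shape) < scaled - lower)
    # Only the integer levels, the sign bits, and the scalar `norm` would be
    # communicated; the receiver dequantizes as below.
    return np.sign(grad) * (level / num_levels) * norm

g = np.random.default_rng(0).standard_normal(6).astype(np.float32)
print(np.round(g, 3))
print(sign_quantize(g))
print(np.round(uniform_quantize(g), 3))
```

The stochastic rounding step is what keeps the uniform quantizer unbiased,
which is the property that most convergence guarantees for this class of
methods rest on.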
Date: Friday, 20 December 2024
Time: 4:00pm - 6:00pm
Venue: Room 2128A (Lift 19)
Committee Members: Prof. Qiong Luo (Supervisor)
Prof. Gary Chan (Chairperson)
Prof. Kai Chen
Dr. Xiaomin Ouyang