LightSecAgg: Rethinking Secure Aggregation in Federated Learning

Speaker: Dr. Songze LI
         Assistant Professor at the Internet of Things Thrust, HKUST (GZ)
         Affiliate Assistant Professor
         Department of Computer Science and Engineering, HKUST (CWB)

Title:  "LightSecAgg: Rethinking Secure Aggregation in Federated Learning"

Date:   Monday, 25 October 2021

Time:   4:00pm - 5:00pm

Venue:  Lecture Theater F (Leung Yat Sing Lecture Theater)
        (near lift 25/26, HKUST)

Zoom link:

Meeting ID:     955 3204 9042
Passcode:       CSE

**Note: CSE PGs with NIHK status, please attend the seminar via Zoom**


Secure model aggregation is a key component of federated learning (FL)
that aims at protecting the privacy of each user's individual model, while
allowing their global aggregation. It can be applied to any
aggregation-based approach, including algorithms for training a global
model (e.g., FedNova, FedProx, FedOpt), as well as personalized FL
frameworks (e.g., pFedMe, Ditto, Per-FedAvg). Model aggregation must also
be resilient to the user dropouts that are likely in FL systems, which
makes its design substantially more complex. State-of-the-art secure
aggregation protocols essentially rely on secret sharing of the random
seeds used for mask generation at the users, in order to enable
reconstruction and cancellation of the masks belonging to dropped users.
The complexity of such approaches, however, grows substantially with the
number of dropped users.
We propose a new approach, named LightSecAgg, to overcome this bottleneck
by turning the focus from "random-seed reconstruction of the dropped
users" to "one-shot aggregate-mask reconstruction of the active users".
More specifically, in LightSecAgg each user protects its local model by
generating a single random mask. This mask is then encoded and shared with
the other users, in such a way that the aggregate mask of any sufficiently
large set of active users can be reconstructed directly at the server from
the encoded masks. We show that LightSecAgg achieves the same privacy and
dropout-resiliency guarantees as the state-of-the-art protocols, while
significantly reducing the overhead for resiliency to dropped users.
Furthermore, our system optimization helps to hide the runtime cost of
offline processing by parallelizing it with model training. We evaluate
LightSecAgg via extensive experiments for training diverse models
(logistic regression, shallow CNNs, MobileNetV3, and EfficientNet-B0) on
various datasets (FEMNIST, CIFAR-100, GLD-23K) in a realistic FL system,
and demonstrate that LightSecAgg significantly reduces the total training
time, achieving a performance gain of up to 12.7x over baselines.
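The one-shot idea described above can be illustrated with a toy scalar
example. The sketch below uses Shamir secret sharing over a prime field as
a simple stand-in for the MDS mask encoding in the actual protocol; the
field modulus, parameters, and variable names are illustrative assumptions,
and real models are vectors rather than scalars. The key point it shows:
each active user uploads its masked model plus a single aggregate of the
mask shares it holds, and the server cancels the aggregate mask in one
shot, with no per-dropout seed reconstruction.

```python
import secrets

P = 2**61 - 1  # toy prime field modulus (illustrative, not the paper's choice)

def shamir_shares(secret, n, t):
    """Shamir-share `secret` among users 1..n: any t shares recover it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return {j: f(j) for j in range(1, n + 1)}

def interpolate_at_zero(points):
    """Lagrange interpolation at x = 0 over the field mod P."""
    total = 0
    for xj, yj in points.items():
        num, den = 1, 1
        for xm in points:
            if xm != xj:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

n, t = 5, 3
models = [secrets.randbelow(1000) for _ in range(n)]  # scalar "models" for brevity
masks = [secrets.randbelow(P) for _ in range(n)]      # one random mask per user

# Offline phase (parallelizable with training): each user i encodes and
# shares its single mask; user j holds shares[i][j].
shares = [shamir_shares(masks[i], n, t) for i in range(n)]

# Online phase: users 2 and 5 drop out; the active set remains.
active = [1, 3, 4]

# Each active user uploads its masked model once.
masked = {i: (models[i - 1] + masks[i - 1]) % P for i in active}

# One-shot step: each active user j sends a SINGLE aggregate of the shares
# it holds for the active set (not one share per dropped user).
agg_share = {j: sum(shares[i - 1][j] for i in active) % P for j in active}

# The server reconstructs the aggregate mask from any t aggregates
# and cancels it from the sum of masked models.
agg_mask = interpolate_at_zero({j: agg_share[j] for j in list(active)[:t]})
agg_model = (sum(masked.values()) - agg_mask) % P
```

Because the sum of the users' degree-(t-1) sharing polynomials is itself a
degree-(t-1) polynomial, any t aggregate shares suffice, which is where the
dropout resilience of the reconstruction step comes from in this sketch.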


Dr. Songze Li is an assistant professor at the Internet of Things thrust
in HKUST (GZ), and an affiliate assistant professor at CSE department in
HKUST (CWB). Before joining HKUST in 2020, Dr. Li worked as a researcher
at Stanford University on secure and scalable blockchain consensus
protocols. Dr. Li received his Ph.D. degree from the University of Southern
California in 2018, and his B.Sc. degree from New York University in 2011,
both in electrical engineering.

Dr. Li's current research interests lie at the intersection of the theory
and systems of designing efficient, scalable, and secure distributed
computing solutions, particularly for federated learning and blockchain
applications. Among his major contributions, Dr. Li first introduced
techniques from information and coding theory into the design of
distributed computing algorithms, which opened up a new research direction
of designing codes to speed up and provide security for computations.
Dr. Li received the USC Viterbi School of Engineering Fellowship in 2011,
was a Qualcomm Innovation Fellowship finalist in 2017, and won the Best
Paper Award at the NeurIPS-20 Workshop on Scalability, Privacy, and
Security in Federated Learning.