PhD Qualifying Examination
Title: "DNN security from the perspective of ML infrastructures"
by
Mr. Yanzuo CHEN
Abstract:
Deep neural networks (DNNs) are central to advances across many domains of
artificial intelligence, including natural language processing, computer
vision, and autonomous driving. In the realization and operational
deployment of these DNN models, machine learning (ML) infrastructures,
encompassing components such as deep learning (DL) frameworks, runtime
environments, and computing hardware, play a crucial role. As such, their
security is essential, and current research has shown that attackers can
exploit vulnerabilities in these infrastructures to cause severe damage,
including privacy leaks, corrupted model behaviors, and distorted outputs.
Defenses have also been proposed to mitigate these threats and enhance the
robustness of ML infrastructures and DNN applications.
This survey aims to provide a comprehensive review of existing research on the
security of ML infrastructures and its interaction with DNN models and
applications. We begin by introducing modern ML infrastructures, which have
rapidly evolved over the past decade. Then, we present the current attack
and defense works, categorizing them from multiple aspects including their
objectives, attacked and defended targets, and the underlying techniques. We
conclude by discussing the limitations and extensions of existing research,
providing insights into future research directions in this field.
Date: Monday, 13 May 2024
Time: 3:00pm - 5:00pm
Venue: Room 3494
Lifts 25/26
Committee Members: Dr. Shuai Wang (Supervisor)
Dr. Binhang Yuan (Chairperson)
Dr. Dimitris Papadopoulos
Dr. Wei Wang