PhD Qualifying Examination
Title: "Adversarial Attack and Defense on Graph Neural Networks"
by
Mr. Haoyang LI
Abstract:
Graph Neural Networks (GNNs) have achieved great success in various graph
tasks, such as node classification and online recommendation. Despite
these successes, recent studies have revealed that GNNs are vulnerable to
adversarial attacks on graph data, including topology modifications and
feature perturbations. Attackers can subtly manipulate graph data to
mislead GNNs into making incorrect predictions. Since such attacks degrade
the performance of GNNs and can cause economic losses in real-world
applications, researchers have proposed defense methods that enable GNNs
to make correct predictions even under attack. Given the importance of graph analysis in
real-world applications, it is necessary to provide a comprehensive survey
of existing adversarial attacks and defenses on GNNs. In this survey, we
first categorize existing adversarial attacks and defenses, and review the
corresponding state-of-the-art methods. Then, we discuss future research
directions for both attacks and defenses.
Date: Monday, 27 March 2023
Time: 2:00pm - 4:00pm
Venue: Room 5501
Lifts 25/26
Committee Members: Prof. Lei Chen (Supervisor)
Prof. Bo Li (Chairperson)
Dr. Dan Xu
Prof. Ke Yi
**** ALL are Welcome ****