Towards Panoramic Explainability of Graph Neural Networks

The Hong Kong University of Science and Technology
Department of Computer Science and Engineering


PhD Thesis Defence


Title: "Towards Panoramic Explainability of Graph Neural Networks"

By

Miss Ge LV


Abstract

Graph Neural Networks (GNNs) have demonstrated outstanding effectiveness on 
both heterogeneous and homogeneous graphs. However, their black-box nature 
prohibits human users from comprehending their working mechanisms. Recent 
efforts have been dedicated to explaining GNNs’ predictions. Existing methods 
can be classified into two groups based on the scale of their output 
explanations: global explainers and local explainers. In this thesis, we 
investigate the panoramic explainability of GNNs, encompassing both global and 
local perspectives.

To achieve global explainability, we introduce a data-aware explainer called 
DAGExplainer. Specifically, we observe three properties of superior 
explanations for a pretrained GNN: they should be highly recognized by the 
model, compliant with the data distribution, and discriminative among all the 
classes. The first property entails an explanation to be faithful to the model, 
and the other two require the explanation to be convincing regarding the data 
distribution. Guided by these properties, we design metrics to quantify the 
quality of each single explanation and formulate the problem of finding data 
aware global explanations for a pretrained GNN as an optimizing problem. We 
prove that the problem is NP-hard and adopt a randomized greedy algorithm to 
find a near-optimal solution. Furthermore, we derive an improved bound of the 
approximation algorithm in our problem over the state-of-the-art (SOTA) best.
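To convey the flavor of this style of approximation, the sketch below shows a generic randomized greedy subset selection: at each step it computes the marginal gain of every remaining candidate and picks uniformly among the top few. This is only an illustration of the algorithmic template, not DAGExplainer's actual metrics or procedure; `score`, `sample_size`, and all names here are hypothetical.

```python
import random

def randomized_greedy(candidates, score, k, sample_size=5, seed=0):
    """Select up to k items by repeatedly sampling uniformly from the
    candidates with the largest marginal gain (illustrative sketch only;
    not the thesis's algorithm)."""
    rng = random.Random(seed)
    selected = []
    remaining = set(candidates)
    for _ in range(k):
        if not remaining:
            break
        # marginal gain of adding candidate c to the current selection
        gains = {c: score(selected + [c]) - score(selected) for c in remaining}
        top = sorted(remaining, key=lambda c: gains[c], reverse=True)[:sample_size]
        choice = rng.choice(top)  # randomization over the best candidates
        selected.append(choice)
        remaining.remove(choice)
    return selected
```

With `sample_size=1` this degenerates to plain greedy; larger values trade a little per-step quality for the randomization that such algorithms exploit to obtain improved expected approximation guarantees.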

Additionally, we propose a local explainer to explore heterogeneity-agnostic 
multi-level explainability. Since both heterogeneous and homogeneous graphs 
are indispensable in real-life applications, a more general, end-to-end 
explainer is a natural choice. Meanwhile, feature-level explanation is often 
ignored by existing techniques, while topological-level explanation alone can 
be incomplete and deceptive. Thus, we propose a heterogeneity-agnostic 
multi-level explainer, named HENCE-X, a causality-guided method that captures 
the non-linear dependencies of model behavior on the input using conditional 
probabilities. We prove theoretically that HENCE-X identifies the Markov 
blanket of the explained prediction, implying that all the information on 
which the prediction depends is accurately identified.
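As a hedged illustration of the Markov-blanket idea (not HENCE-X's actual procedure), the sketch below greedily grows a candidate blanket for a discrete target Y: it repeatedly adds the feature with the largest plug-in estimate of conditional mutual information I(Y; X | blanket), stopping when no feature adds information. All function names and the threshold `eps` are hypothetical.

```python
import math
import random
from collections import Counter

def cmi(samples, y_idx, x_idx, cond_idx):
    """Plug-in estimate of conditional mutual information I(Y; X | Z)
    for discrete variables, from a list of sample tuples."""
    n = len(samples)
    joint = Counter((s[y_idx], s[x_idx], tuple(s[i] for i in cond_idx)) for s in samples)
    yz = Counter((s[y_idx], tuple(s[i] for i in cond_idx)) for s in samples)
    xz = Counter((s[x_idx], tuple(s[i] for i in cond_idx)) for s in samples)
    z = Counter(tuple(s[i] for i in cond_idx) for s in samples)
    total = 0.0
    for (y, x, zv), c in joint.items():
        # p(x,y,z) * log[ p(x,y,z) p(z) / (p(y,z) p(x,z)) ]
        total += (c / n) * math.log((c * z[zv]) / (yz[(y, zv)] * xz[(x, zv)]))
    return total

def grow_markov_blanket(samples, y_idx, feat_idx, eps=0.02):
    """Grow phase of an IAMB-style Markov blanket search (sketch):
    add the feature most informative about Y given the current blanket."""
    mb, remaining = [], list(feat_idx)
    while remaining:
        gains = {i: cmi(samples, y_idx, i, mb) for i in remaining}
        best = max(gains, key=gains.get)
        if gains[best] < eps:
            break  # nothing left carries information about Y
        mb.append(best)
        remaining.remove(best)
    return sorted(mb)
```

On data where Y depends on features 0 and 1 but not on a noise feature, the blanket returned is exactly the set the prediction depends on, which is the guarantee the theoretical analysis above formalizes for HENCE-X.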

Experimental results demonstrate the superior performance of the proposed 
approaches in generating faithful explanations for GNNs, surpassing SOTA 
techniques. Directions for further improving the explainability of GNNs are 
discussed as future work.


Date:			Friday, 3 November 2023

Time:			12:00pm - 2:00pm

Venue:			Room 5510
 			lifts 25/26

Chairperson:		Prof. Ivan IP (MATH)

Committee Members:	Prof. Lei CHEN (Supervisor)
 			Prof. Junxian HE
 			Prof. Ke YI
 			Prof. Amy FU (LIFS)
 			Prof. Qing LI (PolyU)


**** ALL are Welcome ****