2022
DOI: 10.1109/tpami.2022.3204236
Explainability in Graph Neural Networks: A Taxonomic Survey

Cited by 206 publications (153 citation statements)
References 55 publications
“…There have been only limited works studying the explainability of GNNs. In contrast to CNN-based approaches where explanations are usually provided at pixel-level [85], for graph data the focus is on the structural information, i.e., the identification of the salient nodes and/or edges contributing the most to the GNN classification decision [86]. In the following, we briefly survey techniques most relevant to ours, i.e., targeting graph classification tasks and providing node-level (rather than edge-level) explanations.…”
Section: B. GNN Decision Explanation (mentioning)
confidence: 99%
“…In the following, we briefly survey techniques most relevant to ours, i.e., targeting graph classification tasks and providing node-level (rather than edge-level) explanations. For a broader survey of various works on explainability the interested reader is referred to [86]. In [87], for each test instance the so-called GNNExplainer maximizes the mutual information between the GNN's prediction and a set of generated subgraph structures to learn a soft mask for selecting the nodes explaining the model's outcome.…”
Section: B. GNN Decision Explanation (mentioning)
confidence: 99%
“…Explanation methods for GNNs. Several GNN XAI approaches have been proposed, and a recent survey of the most relevant work is presented in [53].…”
Section: Related Work (mentioning)
confidence: 99%
“…Since then, a multitude of instance-level explanation methods of GNNs has been proposed in the past few years. According to a recent survey [10], these instance-level explanation methods can be classified into four categories: gradient-based methods [26,27], perturbation-based methods [14,13,28], decomposition methods [26,27,29], and surrogate methods [15,30,31]. These four categories all attempt to explain how a GNN model makes such a prediction for a specific input instance.…”
Section: Related Work (mentioning)
confidence: 99%
“…However, the explainability of deep learning models on graphs is still less explored. Compared with explaining deep learning models on text or image data, explaining deep graph models is a more challenging task for several reasons [10]: (i) since a graph is not a grid-structured data like the image or text, the locality information of nodes is absent and each node has a varying number of neighbors [11], (ii) the adjacency matrix representing the topological information has only discrete values, which cannot be directly optimized via gradient-based methods [12], and (iii) graph data structure is heterogeneous in nature with different types of node features and edge features, which makes developing a one-size-fits-all explanation method for GNNs to be even more challenging.…”
Section: Introduction (mentioning)
confidence: 99%
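Point (ii) in the quote above, that a discrete adjacency matrix cannot be optimized with gradients, is commonly handled by a continuous relaxation. A minimal sketch (the edge-logit values are invented for illustration):

```python
import numpy as np

# A {0,1} adjacency matrix is piecewise constant, so its gradient is zero
# almost everywhere. Explanation methods therefore optimize real-valued edge
# logits and squash them through a sigmoid into a soft, differentiable mask.
edge_logits = np.array([[0.0,  2.0],
                        [-1.0, 0.0]])        # learnable, illustrative values
soft_mask = 1 / (1 + np.exp(-edge_logits))   # entries in (0, 1), differentiable
hard_mask = (soft_mask > 0.5).astype(int)    # discretized after optimization
print(soft_mask)
print(hard_mask)
```

After optimization converges, thresholding the soft mask recovers a discrete explanatory subgraph while the training itself stays fully differentiable.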