2023
DOI: 10.1109/tpami.2022.3170302
Reinforced Causal Explainer for Graph Neural Networks

Abstract: Explainability is crucial for probing graph neural networks (GNNs), answering questions like "Why does the GNN model make a certain prediction?". Feature attribution is a prevalent technique for highlighting an explanatory subgraph in the input graph, one that plausibly leads the GNN model to make its prediction. Various attribution methods exploit gradient-like or attention scores as the attributions of edges, and then select the salient edges with the top attribution scores as the explanation. However…
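To make the attribution pipeline in the abstract concrete, the sketch below scores each edge by the gradient of the predicted-class logit with respect to a per-edge weight, then keeps the top-k edges as the explanatory subgraph. This is a minimal illustration in plain PyTorch; the TinyGNN model, the random features, and the toy edge list are assumptions for demonstration, not the paper's actual model or data.

```python
# Minimal sketch of gradient-based edge attribution with top-k selection.
# TinyGNN and the toy graph are illustrative stand-ins, not the paper's setup.
import torch

class TinyGNN(torch.nn.Module):
    """One round of edge-weighted message passing, then a linear graph readout."""
    def __init__(self, in_dim, n_classes):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, n_classes)

    def forward(self, x, edge_index, edge_weight):
        src, dst = edge_index
        agg = torch.zeros_like(x)
        # Weighted sum of neighbour features; edge_weight gates each edge.
        agg.index_add_(0, dst, edge_weight.unsqueeze(-1) * x[src])
        return self.lin(agg.mean(dim=0, keepdim=True))  # graph-level logits

x = torch.randn(4, 8)                     # 4 nodes with 8 features each
edge_index = torch.tensor([[0, 1, 2, 3],  # source nodes
                           [1, 2, 3, 0]]) # target nodes
model = TinyGNN(8, 2)

# Attribution: gradient of the predicted-class logit w.r.t. each edge weight.
edge_weight = torch.ones(edge_index.size(1), requires_grad=True)
logits = model(x, edge_index, edge_weight)
logits[0, logits.argmax()].backward()
attributions = edge_weight.grad.abs()

# Explanation: keep the k edges with the largest attribution scores.
k = 2
top_edges = attributions.topk(k).indices
print("explanatory edges:", edge_index[:, top_edges].tolist())
```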

Help me understand this report
View preprint versions

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
1
1
1

Citation Types

0
6
0

Year Published

2023
2023
2024
2024

Publication Types

Select...
3
3
2

Relationship

0
8

Authors

Journals

citations
Cited by 25 publications
(6 citation statements)
references
References 28 publications
0
6
0
Order By: Relevance
“…Furthermore, PGM-Explainer [13] presented a probabilistic graphical model so as to provide an explanation by investigating predictions of GNNs when the GNN's input is perturbed. In RC-Explainer [15], a reinforcement learning agent was presented to construct an explanatory subgraph by adding a salient edge to connect the previously selected subgraph at each step, where a reward is obtained according to the causal effect for each edge addition. Most recently, CF-GNNExplainer [14] presented a counterfactual explanation in the form of minimal perturbation to the input graph such that the model prediction changes.…”
Section: Explanation Methods for GNNs
confidence: 99%
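The step-by-step construction that the excerpt attributes to RC-Explainer can be illustrated with a greedy stand-in: at each step, score every remaining edge by how much adding it raises the model's confidence in the original prediction (a simple proxy for the causal-effect reward), and commit the best edge. This sketch replaces the learned reinforcement-learning policy with greedy reward maximization and, for brevity, omits the constraint that each new edge must attach to the previously selected subgraph; it reuses the toy model, x, and edge_index from the sketch after the abstract.

```python
# Greedy stand-in for RC-Explainer's sequential subgraph construction.
# `model`, `x`, `edge_index` come from the toy sketch after the abstract.
import torch

@torch.no_grad()
def predicted_prob(mask):
    """Probability of the originally predicted class under a 0/1 edge mask."""
    logits = model(x, edge_index, mask.float())
    return torch.softmax(logits, dim=-1)[0, target_class]

with torch.no_grad():
    full_logits = model(x, edge_index, torch.ones(edge_index.size(1)))
target_class = full_logits.argmax().item()

mask = torch.zeros(edge_index.size(1), dtype=torch.bool)
for _ in range(2):                            # grow the subgraph two edges deep
    base = predicted_prob(mask)
    best_gain, best_edge = -float("inf"), None
    for e in range(edge_index.size(1)):
        if mask[e]:
            continue
        trial = mask.clone()
        trial[e] = True
        gain = predicted_prob(trial) - base   # causal effect of adding edge e
        if gain > best_gain:
            best_gain, best_edge = gain, e
    mask[best_edge] = True                    # commit the most salient edge
print("explanatory subgraph edges:", edge_index[:, mask].tolist())
```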
“…Thus, fostering explainability for GNN models has become of recent interest as it enables a thorough understanding of the model's behavior as well as trust and transparency [3], [4]. Recent attempts to explain GNN models mostly highlight a subgraph structure within a given input graph that contributed most towards the underlying GNN model's prediction [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15]. These so-called instance-level explanation methods for GNNs provide an in-depth analysis given a graph instance [16].…”
Section: Background and Motivation
confidence: 99%
“…As for the perturbation-based approaches, GNNExplainer Ying et al (2019) is the first specific design for explanation of GNNs, which formulates an optimization task to maximize the mutual information between the GNN predictions and the distribution of potential subgraphs. Unfortunately, GNNExplainer and Causal Screening Wang et al (2020) may lack a global view of explanations and be stuck at local optima. Even though PGExplainer Luo et al (2020) and GraphMask Schlichtkrull et al (2020) could provide some global insights, they require a reparameterization trick and could not guarantee that the outputs of the subgraph are connected, which lacks explanations for the message passing scheme in GNNs.…”
Section: Related Work
confidence: 99%
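In practice, GNNExplainer's mutual-information objective is optimized by learning a soft mask over edges: the masked prediction is pushed toward the original one while an L1 penalty keeps the mask sparse. The sketch below shows that edge-mask optimization, again reusing the toy model, x, and edge_index from the first sketch; the cross-entropy-plus-sparsity loss is a common practical surrogate for the mutual-information term, and the sparsity weight and iteration count are illustrative choices.

```python
# GNNExplainer-style soft edge mask: keep the masked prediction close to the
# original one (a practical surrogate for mutual information) while an L1
# penalty encourages a sparse, human-readable explanation.
import torch

with torch.no_grad():
    target_class = model(x, edge_index,
                         torch.ones(edge_index.size(1))).argmax().item()

mask_logits = torch.zeros(edge_index.size(1), requires_grad=True)
optimizer = torch.optim.Adam([mask_logits], lr=0.1)
lam = 0.05                                  # sparsity weight (illustrative)

for _ in range(100):
    optimizer.zero_grad()
    edge_mask = torch.sigmoid(mask_logits)  # soft mask in (0, 1)
    log_probs = torch.log_softmax(model(x, edge_index, edge_mask), dim=-1)
    loss = -log_probs[0, target_class] + lam * edge_mask.sum()
    loss.backward()
    optimizer.step()

print("learned edge mask:", torch.sigmoid(mask_logits).detach())
```

Unlike the reparameterization trick the excerpt above attributes to PGExplainer and GraphMask, this sketch uses a plain sigmoid relaxation, which is simpler but yields a soft rather than strictly discrete subgraph.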
“…Instance-level approaches explain models by identifying the most critical input features for their predictions. They have four sub-branches: Gradients/Features-based Zhou et al (2016); Baldassarre & Azizpour (2019); Pope et al (2019), Perturbation-based Ying et al (2019); Luo et al (2020); Schlichtkrull et al (2020); Wang et al (2020), Decompose-based Baldassarre & Azizpour (2019); Schnake et al (2020); Feng et al (2021) and Surrogate-based Vu & Thai (2020); Huang et al (2022). Some works such as XGNN Yuan et al (2020) and RGExplainer Shan et al (2021) apply reinforcement learning (RL) to model-level and instance-level explanations.…”
Section: Introduction
confidence: 99%
“…Many attempts have been made to interpret GNN models and explain their predictions [24,31,33,42,50,53]. These methods can be grouped into two categories based on granularity: (1) instance-level explanation, which explains the prediction for each instance by identifying significant substructures [31,50,53], and (2) model-level explanation, which seeks to understand the global decision rules captured by the GNN [2,24,33].…”
Section: GNN Explanation
confidence: 99%