2022
DOI: 10.48550/arxiv.2201.08802
Preprint

Deconfounding to Explanation Evaluation in Graph Neural Networks

Abstract: Explainability of graph neural networks (GNNs) aims to answer the question "Why did the GNN make a certain prediction?", which is crucial for interpreting model predictions. The feature attribution framework distributes a GNN's prediction over its input features (e.g., edges), identifying an influential subgraph as the explanation. When evaluating an explanation (i.e., the importance of a subgraph), the standard approach is to audit the model prediction on the subgraph alone. However, we argue that a distribution shift exists between the…
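The "subgraph audit" evaluation the abstract questions is simple to state: feed only the explanatory subgraph to the trained GNN and compare its prediction with the full-graph prediction. Below is a minimal sketch of that audit, assuming a toy one-layer GNN in PyTorch; ToyGCN, audit_subgraph, and the random graph and edge mask are illustrative placeholders, not the paper's code.

    # A minimal sketch (not the authors' implementation) of the standard
    # subgraph-audit evaluation: keep only the top-scoring edges of an
    # attribution mask, re-run the model, and compare predictions.
    import torch
    import torch.nn as nn

    class ToyGCN(nn.Module):
        """One-layer GCN-style model: mean neighbor aggregation, linear head."""
        def __init__(self, in_dim, n_classes):
            super().__init__()
            self.lin = nn.Linear(in_dim, n_classes)

        def forward(self, x, adj):
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
            h = (adj @ x) / deg          # aggregate neighbor features
            return self.lin(h).mean(0)   # graph-level logits via mean pooling

    def audit_subgraph(model, x, adj, edge_mask, keep_ratio=0.3):
        """Keep the top-k most important edges and re-run the model.

        This is the evaluation step the paper argues is confounded: the
        masked graph comes from a different distribution than the training
        graphs, so a prediction change may reflect that shift rather than
        true edge importance.
        """
        scores = edge_mask[adj > 0]                    # scores of existing edges
        k = max(1, int(keep_ratio * scores.numel()))
        thresh = scores.topk(k).values.min()
        sub_adj = adj * (edge_mask >= thresh).float()  # drop low-importance edges
        with torch.no_grad():
            full = model(x, adj).softmax(-1)
            sub = model(x, sub_adj).softmax(-1)
        return full, sub

    # Toy usage: random graph, random "attribution" scores as the explanation.
    torch.manual_seed(0)
    n, d = 8, 5
    x = torch.randn(n, d)
    adj = (torch.rand(n, n) > 0.6).float()
    adj = ((adj + adj.T) > 0).float()        # make the graph undirected
    edge_mask = torch.rand(n, n)             # hypothetical attribution scores
    model = ToyGCN(d, 2)
    full, sub = audit_subgraph(model, x, adj, edge_mask)
    print("full-graph prediction:", full.tolist())
    print("subgraph-only prediction:", sub.tolist())

A large gap between the two printed distributions is conventionally read as evidence that the kept edges matter; the paper's point is that part of that gap can instead come from the out-of-distribution nature of the masked graph.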

Cited by 0 publications
References 13 publications