2020
DOI: 10.48550/arxiv.2006.00305
Preprint
RelEx: A Model-Agnostic Relational Model Explainer

Abstract: In recent years, considerable progress has been made on improving the interpretability of machine learning models. This is essential, as complex deep learning models with millions of parameters produce state-of-the-art results, but it can be nearly impossible to explain their predictions. While various explainability techniques have achieved impressive results, nearly all of them assume each data instance to be independent and identically distributed (iid). This excludes relational models, such as Statistical …

Cited by 7 publications (8 citation statements) | References 9 publications
“…After that, Zhang et al. [34] proposed a model-agnostic relational model explainer called RelEx, which treats the underlying model as a black box and learns relational explanations. RelEx constructs explanations in two steps: learning a local differentiable approximation of the black-box model, and then learning an interpretable mask over the local approximation using subgraphs.…”
Section: Interpretable GNN Through Subgraphs
confidence: 99%
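The two-step procedure described in the quote can be sketched on a toy problem. This is a minimal illustration, not the authors' implementation: the black-box function, the edge count, the linear form of the local approximation, and the regularization strength are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_edges = 6

# Hypothetical black box: the prediction depends only on edges 0 and 3.
def black_box(edge_mask):
    return float(edge_mask[0] * edge_mask[3])

# Step 1: fit a local differentiable approximation (linear, for illustration)
# from random edge-mask perturbations and the black box's responses.
X = rng.integers(0, 2, size=(1000, n_edges)).astype(float)
y = np.array([black_box(x) for x in X])
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

# Step 2: learn a sparse edge mask on the surrogate by gradient ascent on
# surrogate(mask) - lam * ||mask||_1, with the mask clipped to [0, 1].
mask = np.full(n_edges, 0.5)
lr, lam = 0.1, 0.1
for _ in range(300):
    grad = w[:-1] - lam * np.sign(mask)
    mask = np.clip(mask + lr * grad, 0.0, 1.0)

explanation = sorted(np.where(mask > 0.5)[0].tolist())
print(explanation)   # the edges the learned mask keeps
```

On this toy black box the mask recovers exactly the two edges the prediction depends on; real graphs would require richer surrogates and subgraph-structured masks, as the quote indicates.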
See 1 more Smart Citation
“…After that, Zhang et al [34] proposed a model-agnostic relational model explainer called RelEx, which treats the underlying model as a black-box model and learns relational explanations. The RelEx constructs explanations using two steps-learning a local differentiable approximation of the black-box model and then learning an interpretable mask over the local approximation with the use of subgraphs.…”
Section: Interpretable Gnn Through Subgraphsmentioning
confidence: 99%
“…Robustness means that an interpretation method's explanations resist attacks such as input corruption/perturbation, adversarial attacks, and model manipulation. A robust interpretation method provides similar explanations despite the presence of such attacks [11,34].…”
Section: Evaluation Metrics For Explainable Techniques
confidence: 99%
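One simple way to quantify this notion of robustness is to compare the top-k explanation before and after perturbing the input. The stand-in model, the occlusion-style explainer, and the Jaccard-overlap score below are all assumed for illustration; this is a sketch of the metric's idea, not a method taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fixed linear model; the explainer scores each input feature.
w_model = np.array([2.0, -1.5, 0.05, 0.0, 1.0])

def explain(x, k=2):
    scores = np.abs(w_model * x)          # occlusion-style importance scores
    return set(np.argsort(scores)[-k:])   # top-k feature indices

# Robustness: average Jaccard overlap between the explanation of the clean
# input and the explanations of randomly corrupted copies of it.
x = np.ones(5)
base = explain(x)
overlaps = []
for _ in range(100):
    x_pert = x + rng.normal(0.0, 0.05, size=x.shape)   # input corruption
    e = explain(x_pert)
    overlaps.append(len(base & e) / len(base | e))

robustness = float(np.mean(overlaps))
print(robustness)   # near 1.0 -> explanations are stable under perturbation
```

A score near 1 means the explanation barely changes under perturbation; adversarial attacks and model manipulation could be plugged into the same template in place of Gaussian noise.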
“…Moreover, we conduct ablation studies and a sensitivity analysis in Appendix G to better understand the model components and validate the effectiveness of the designed objective. (Luo et al., 2020; Ying et al., 2019; Yuan et al., 2020a; Yue Zhang, 2020; Michael Sejr Schlichtkrull, 2021) learn the masks on graph features. Typically, GNN-Explainer (Ying et al., 2019) applies instance-wise masks on the messages carried by graph structures, and maximizes the mutual information between the masked graph and the prediction.…”
Section: Study Of Generators (RQ2)
confidence: 99%
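The mask-learning idea attributed to GNN-Explainer in the quote can be sketched with a toy differentiable model: learn a soft edge mask that preserves the model's prediction (cross-entropy to the full-graph prediction serves as a stand-in for the mutual-information objective here) while a sparsity penalty prunes edges. Everything below — the logistic stand-in model, its weights, and the hyperparameters — is assumed for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy differentiable stand-in for a trained GNN: the class probability is a
# logistic function of which edges are present (weights are assumed).
w = np.array([3.0, 0.1, -0.2, 2.5, 0.0])   # edges 0 and 3 drive the label

def model(edge_mask):
    return sigmoid(edge_mask @ w - 2.0)

y_full = model(np.ones(5))                  # prediction on the full graph

# Learn a soft edge mask m = sigmoid(theta) minimizing
# BCE(model(m), y_full) + lam * sum(m)  via manual gradient descent.
theta = np.zeros(5)
lr, lam = 0.5, 0.05
for _ in range(500):
    m = sigmoid(theta)
    p = model(m)
    # chain rule: dBCE/dm_i = (p - y) * w_i; dm_i/dtheta_i = m_i (1 - m_i)
    grad = ((p - y_full) * w + lam) * m * (1.0 - m)
    theta -= lr * grad

explanation_edges = sorted(np.where(sigmoid(theta) > 0.5)[0].tolist())
print(explanation_edges)   # edges kept by the learned soft mask
```

The sparsity term drives the mask toward the few edges that carry the prediction, which is the intuition behind masking the messages on graph structure rather than raw features.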
“…As discussed in Section 1, not much effort has been devoted to explainability in graph classification. Explanation methods for node classification and link prediction have been proposed in [16,43,44], but this is a different problem from the graph classification task, in which the aim is to classify the whole graph rather than its nodes.…”
Section: Related Work
confidence: 99%