Proceedings of the ACM Web Conference 2022
DOI: 10.1145/3485447.3511948
Learning and Evaluating Graph Neural Network Explanations based on Counterfactual and Factual Reasoning

Cited by 57 publications (42 citation statements). References 18 publications.
“…3) Reasoning Rationale: From the causal perspective, most of today's explanation methods are based on factual reasoning (e.g., GNNExplainer [22], PGExplainer [56], XGNN [132], RG-Explainer [152], OrphicX [153]) or counterfactual reasoning (e.g., CF-GNNExplainer [162], Gem [155]). One recent study [164] shows that considering only factual reasoning results in extra information being included in explanations (i.e., sufficient but not necessary explanations), while considering only counterfactual reasoning breaks the complement graph of explanations (i.e., necessary but not sufficient explanations). Existing approaches that consider both forms of reasoning (e.g., RCExplainer [136], CF² [164]) show superiority in terms of explanation robustness [136] and quality (e.g., accuracy, precision) [164].…”
Section: Discussion (mentioning)
confidence: 99%
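
The sufficiency/necessity distinction described above can be made concrete with a small check over an edge-mask explanation. The sketch below is illustrative only (it is not code from CF² [164] or any of the cited explainers) and assumes a PyTorch Geometric-style graph classifier called as `model(x, edge_index)`, with `edge_mask` a boolean tensor marking the explanation edges:

```python
import torch

def evaluate_explanation(model, x, edge_index, edge_mask, target_class):
    """Check an edge-mask explanation from both causal perspectives.

    edge_mask: boolean tensor over edges, True = edge belongs to the
    explanation. Names and signature are illustrative assumptions,
    not the API of any cited explainer.
    """
    model.eval()
    with torch.no_grad():
        # Factual test (sufficiency): the explanation subgraph alone
        # should still yield the original prediction.
        pred_factual = model(x, edge_index[:, edge_mask]).argmax(dim=-1)
        sufficient = bool((pred_factual == target_class).all())

        # Counterfactual test (necessity): removing the explanation
        # edges from the graph should change the prediction.
        pred_counter = model(x, edge_index[:, ~edge_mask]).argmax(dim=-1)
        necessary = bool((pred_counter != target_class).all())

    return sufficient, necessary
```

A factual-only objective optimizes only the first test and so tends to admit superfluous edges, while a counterfactual-only objective optimizes only the second and can select edges that do not reproduce the prediction on their own; this is why methods like RCExplainer [136] and CF² [164] optimize both jointly.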
“…However, an even more fundamental problem is to understand why a model is unfair, i.e., what reasons lead to unfair model outputs. There has been research on explaining recommendation results [44,76,172,196,216,217], explaining graph neural networks [122,171,195,204,205], explaining vision and language models [50,82,87,110,111], etc., but research on explaining why a model is fair or unfair is still very limited. Understanding the "why" is helpful not only from a technical perspective but also from a social perspective.…”
Section: 38 (mentioning)
confidence: 99%
“…In addition to text-based or image-based explainable recommendation, knowledge-aware explainable recommendation has also attracted research attention recently, such as [3,18,60,63,64]. Works using counterfactual reasoning to improve recommendation explainability [31, 56-58, 67] have been proposed very recently. Ghazimatin et al. [31] tried to generate provider-side counterfactual explanations by looking for a minimal set of a user's historical actions (e.g.…”
Section: Explainable Recommendation (mentioning)
confidence: 99%
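
To illustrate the kind of search described for Ghazimatin et al. [31], here is a hedged greedy sketch: `recommend` is a hypothetical callable returning a ranked list of items, and the greedy loop is a simplification of the idea, not the authors' actual algorithm.

```python
def minimal_counterfactual_actions(recommend, history, item, k=10):
    """Greedily search for a small set of past actions whose removal
    pushes `item` out of the user's top-k recommendations.

    `recommend(history)` is a hypothetical callable returning a ranked
    list of items. This is an illustrative sketch only, not the method
    of Ghazimatin et al. [31].
    """
    removed = []
    remaining = list(history)
    while remaining and item in recommend(remaining)[:k]:
        # Try removing each remaining action; keep the removal that
        # pushes `item` furthest down the ranking.
        def rank_without(action):
            trial = [a for a in remaining if a != action]
            ranking = recommend(trial)
            return ranking.index(item) if item in ranking else len(ranking)

        best = max(remaining, key=rank_without)
        remaining.remove(best)
        removed.append(best)
    # The removed actions form the counterfactual explanation: without
    # them, the item would no longer be recommended.
    return removed
```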