DOI: 10.1145/3506804
Counterfactual Explanations for Neural Recommenders

Cited by 14 publications (15 citation statements); references 0 publications. Citing publications span 2021–2023.
“…However, this approach is only applicable to PageRank-based recommender models and cannot be easily adapted to other model categories. In addition, Tran et al. [53] propose a white-box explanation approach for Neural-CF models. The approach is based on the influence function [36], which estimates the influence of a training sample on the current model prediction.…”
Section: Counterfactual Machine Learning (citation type: mentioning; confidence: 99%)
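For context, the influence function referenced above approximates how the loss on a test point would change if a single training example were upweighted: influence(z, z_test) ≈ -∇L(z_test)ᵀ H⁻¹ ∇L(z), with H the Hessian of the training loss at the learned parameters. A minimal sketch on a toy linear model (the model, data, and exact Hessian solve are illustrative assumptions, not the Neural-CF setup of Tran et al. [53]):

```python
import torch

torch.manual_seed(0)
w = torch.randn(3, requires_grad=True)        # "learned" parameters (toy)
X, y = torch.randn(20, 3), torch.randn(20)    # toy training data
x_t, y_t = torch.randn(3), torch.tensor(0.5)  # test point to explain

def pointwise_loss(xb, yb):
    return (xb @ w - yb) ** 2

# Gradient of the test loss w.r.t. the parameters
g_test = torch.autograd.grad(pointwise_loss(x_t, y_t), w)[0]

# Exact (damped) Hessian of the mean training loss; real systems use
# implicit Hessian-vector products instead of forming H explicitly.
H = torch.autograd.functional.hessian(
    lambda p: ((X @ p - y) ** 2).mean(), w.detach())
H = H + 1e-3 * torch.eye(3)

# Influence of each training point on the test prediction's loss
for i in range(5):
    g_i = torch.autograd.grad(pointwise_loss(X[i], y[i]), w)[0]
    infl = -(g_test @ torch.linalg.solve(H, g_i)).item()
    print(f"training point {i}: influence {infl:+.4f}")
```

Training points with large negative influence are those whose removal would most change the prediction, which is what makes them candidates for a counterfactual explanation set.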
“…In addition to text-based or image-based explainable recommendation, knowledge-aware explainable recommendation has also attracted research attention recently, such as [3, 18, 60, 63, 64]. Works using counterfactual reasoning to improve recommendation explainability [31, 56-58, 67] have been proposed very recently. Ghazimatin et al. [31] tried to generate provider-side counterfactual explanations by looking for a minimal set of the user's historical actions (e.g.…”
Section: Explainable Recommendation (citation type: mentioning; confidence: 99%)
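The "minimal set of historical actions" idea attributed to Ghazimatin et al. [31] above can be illustrated with a greedy sketch. This is a hedged simplification: the removal order and the `recommend` stand-in are assumptions for illustration, whereas the actual method operates on heterogeneous interaction graphs.

```python
from typing import Callable, FrozenSet, List, Optional

def minimal_counterfactual_actions(
    actions: List[str],
    recommend: Callable[[FrozenSet[str]], str],
) -> Optional[List[str]]:
    """Greedily drop past actions until the top recommendation flips.

    `recommend` is a hypothetical stand-in for re-scoring the user
    with a reduced history; a real method would rank candidate
    removals by estimated impact rather than by list order.
    """
    original = recommend(frozenset(actions))
    removed: List[str] = []
    remaining = list(actions)
    while remaining:
        removed.append(remaining.pop(0))
        if recommend(frozenset(remaining)) != original:
            return removed            # counterfactual set found
    return None                       # history exhausted without a flip

# Toy usage: two comedy interactions drive a comedy recommendation.
def toy_recommend(history: FrozenSet[str]) -> str:
    comedies = sum(1 for a in history if a.startswith("comedy"))
    return "comedy_rec" if comedies >= 2 else "drama_rec"

print(minimal_counterfactual_actions(
    ["comedy_1", "comedy_2", "drama_1"], toy_recommend))
# -> ['comedy_1']: removing it changes the recommendation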
“…Xu et al. [67] proposed to improve on this by using a perturbation model to obtain counterfactuals. Tran et al. [58] adopted influence functions to identify the training points most relevant to a recommendation while deducing a counterfactual set for explanations. Tan et al. [57] proposed to generate and evaluate explanations that consider the causal relations to the outcome.…”
Section: Explainable Recommendation (citation type: mentioning; confidence: 99%)
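The perturbation idea above can be pictured as a small gradient-based search; the sketch below is a generic illustration, not the actual perturbation model of Xu et al. [67]. It nudges a user's feature vector until the top-scored item changes, with a squared-L2 penalty keeping the perturbation small.

```python
import torch

torch.manual_seed(0)
W = torch.randn(5, 4)                  # toy scoring weights for 5 items
x = torch.rand(4)                      # user's feature/interaction vector
original = (x @ W.T).argmax().item()   # currently recommended item

delta = torch.zeros(4, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.1)
for step in range(500):
    scores = (x + delta) @ W.T
    if scores.argmax().item() != original:
        print(f"flipped after {step} steps, ||delta|| = {delta.norm():.3f}")
        break
    # Shrink the original item's margin over the runner-up, while the
    # penalty term keeps the counterfactual perturbation minimal.
    margin = scores[original] - scores.topk(2).values[1]
    loss = margin + 0.1 * delta.pow(2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The first `delta` that flips the recommendation serves as the counterfactual: "had the user's features been x + delta, the system would have recommended a different item."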
“…The former mainly leverages inverse propensity scoring [37, 51] and doubly robust estimation [21] to debias user feedback. SCMs typically abstract causal relationships into a causal graph and estimate causal effects via intervention [48, 57] or counterfactual inference [52, 55]; these are widely used for debiasing [48], explainable recommendation [44, 46], and out-of-distribution recommendation [50]. Nevertheless, using causality to promote diversity or to alleviate filter bubbles has received little scrutiny.…”
Section: Related Work (citation type: mentioning; confidence: 99%)
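For reference, a minimal numerical sketch of the inverse propensity scoring idea mentioned above: each observed rating is reweighted by 1 / P(it was observed), which corrects exposure bias in expectation. The propensities are assumed known here; estimating them from logs is the hard part in practice.

```python
import numpy as np

rng = np.random.default_rng(0)
true_ratings = rng.uniform(1, 5, size=100_000)
# Exposure bias: higher-rated items are more likely to be observed.
propensity = np.clip(true_ratings / 5.0, 0.05, 1.0)
observed = rng.random(true_ratings.size) < propensity

naive = true_ratings[observed].mean()     # biased toward high ratings
# IPS: weight each observed rating by the inverse of its observation
# probability and average over the full population size.
ips = (true_ratings[observed] / propensity[observed]).sum() / true_ratings.size
print(f"true {true_ratings.mean():.3f}  naive {naive:.3f}  IPS {ips:.3f}")
```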