Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval 2021
DOI: 10.1145/3404835.3463005
Counterfactual Explanations for Neural Recommenders

Abstract: While neural recommenders have become the state-of-the-art in recent years, the complexity of deep models still makes the generation of tangible explanations for end users a challenging problem. Existing methods are usually based on attention distributions over a variety of features, which are still questionable regarding their suitability as explanations, and rather unwieldy to grasp for an end user. Counterfactual explanations based on a small set of the user's own actions have been shown to be an acceptable…
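The idea sketched in the abstract — finding a small set of the user's own past actions whose removal changes the recommendation — can be illustrated with a greedy search. This is a minimal sketch, not the paper's actual algorithm; the function names, the toy scoring interface, and the greedy removal heuristic are all assumptions for illustration:

```python
from typing import Callable, List, Set

def top_item(history: List[str],
             score: Callable[[List[str], str], float],
             catalog: List[str]) -> str:
    """Item the (toy) recommender ranks highest for this history."""
    return max(catalog, key=lambda item: score(history, item))

def counterfactual_set(history: List[str],
                       score: Callable[[List[str], str], float],
                       catalog: List[str],
                       max_size: int = 3) -> Set[str]:
    """Greedily remove past actions until the top recommendation changes;
    the removed actions then form a counterfactual explanation."""
    target = top_item(history, score, catalog)
    remaining = list(history)
    removed: Set[str] = set()
    while remaining and len(removed) < max_size:
        # drop the action whose removal lowers the target item's score most
        victim = max(remaining,
                     key=lambda a: score(remaining, target)
                     - score([x for x in remaining if x != a], target))
        remaining = [x for x in remaining if x != victim]
        removed.add(victim)
        if top_item(remaining, score, catalog) != target:
            return removed  # this set of actions flips the recommendation
    return set()  # no counterfactual of size <= max_size found
```

With a toy similarity-based scorer, removing the user's horror-related actions would flip a horror recommendation to a comedy one; for a neural recommender, the same search would query the trained model instead of a hand-written score function.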

Cited by 37 publications (11 citation statements); References 27 publications
“…On one hand, post-hoc explainable models [18] consider recommendation and explanation generation as two distinct steps. Methods pertaining to this family either pre-compute paths in the knowledge graph and attach them to the recommended products generated by traditional models [5,1] or generate counterfactual explanations [14,13,9,7]. In both cases, the ranked products are optimized for utility and the training process of the recommendation model does not embed any constraint for the selection of accompanying reasoning paths.…”
Section: Introduction (mentioning; confidence: 99%)
“…The reason being that rumours involve human perception, which is difficult to capture with discrete rules [7], whereas feature-based approaches fail to capture the graph-based structures of rumour propagation [8]. Counter-factual explanations, in turn, focus on features that may change the result of rumour detection rather than the features that led to the detection result in the first place [9].…”
Section: Related Work (mentioning; confidence: 99%)
“…Feature-based explanations, in turn, cannot capture the graphbased propagation structures of rumours [8]. Also, counterfactual approaches are not suited to understand why entities are classified as being part of a rumour, due to their focus on features that may change the result [9]. As such, explanations shall be based on a set of related examples, which enable users to generalize their properties [10].…”
Section: Introduction (mentioning; confidence: 99%)
“…Wachter et al [117] modelled the process of identifying counterfactual examples as an optimization problem which ensures that the resultant examples have desired output and are close to the instance to be explained in the input space. Their idea has inspired subsequent research works [119,120]. Sharma et al [118] proposed a meta-heuristic evolutionary algorithm to identify counterfactual instances.…”
Section: XAI in Healthcare (mentioning; confidence: 99%)
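The Wachter et al. formulation quoted above — find a counterfactual x' that attains the desired output while staying close to the original instance x, i.e. minimise λ·(f(x') − y*)² + d(x, x') — can be sketched as plain gradient descent. This is an illustrative sketch, not code from the cited paper; the finite-difference gradient estimate and all names are assumptions, chosen so that any scalar-output model f can be plugged in:

```python
import numpy as np

def wachter_counterfactual(x, f, y_target, lam=10.0, lr=0.01,
                           steps=2000, eps=1e-4):
    """Minimise  lam * (f(x') - y_target)^2 + ||x' - x||^2  by gradient
    descent, estimating df/dx' with finite differences."""
    x = np.asarray(x, dtype=float)
    xp = x.copy()
    ident = np.eye(len(x))
    for _ in range(steps):
        fx = f(xp)
        # finite-difference estimate of the model's gradient at xp
        g = np.array([(f(xp + eps * ident[i]) - fx) / eps
                      for i in range(len(x))])
        # gradient of the Wachter objective: fit term + proximity term
        grad = 2.0 * lam * (fx - y_target) * g + 2.0 * (xp - x)
        xp -= lr * grad
    return xp
```

On a toy linear model f(x) = x₀ + x₁ starting from the origin with target output 1, the search converges to the closed-form minimiser x' = (λ/(2λ+1), λ/(2λ+1)); larger λ pushes f(x') closer to the target at the cost of a larger perturbation.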