Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization 2021
DOI: 10.1145/3450613.3456846

Model-Agnostic Counterfactual Explanations of Recommendations

Cited by 23 publications (22 citation statements) | References 20 publications
“…On one hand, post-hoc explainable models [18] consider recommendation and explanation generation as two distinct steps. Methods pertaining to this family either pre-compute paths in the knowledge graph and attach them to the recommended products generated by traditional models [5,1] or generate counterfactual explanations [14,13,9,7]. In both cases, the ranked products are optimized for utility and the training process of the recommendation model does not embed any constraint for the selection of accompanying reasoning paths.…”
Section: Introduction
confidence: 99%
“…), the common evaluation metrics are NDCG, precision, recall and coverage [171,235,262,263,316]. For the evaluation of counterfactual explanations, common metrics are Average Treatment Effect (ATE), replacement, Probability of Sufficiency (PS) and Probability of Necessity (PN) [56,147,265,266,275]. Some other evaluation metrics include Perplexity, Mean Explanation Precision (MEP), Mean Explanation Recall (MER) as well as influence [1,187,192,224].…”
Section: Evaluation Of Explanations
confidence: 99%
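For orientation, the sketch below shows how PN and PS are commonly operationalized for counterfactual explanations of recommendations: PN asks whether removing the explanation items drops the recommended item out of the top-k list, PS asks whether the explanation items alone keep it there. The `recommender.top_k(history, k)` interface and the `cases` structure are assumptions for illustration, not an API from the cited works.

```python
# Hypothetical sketch of PN/PS evaluation for counterfactual explanations.
# `cases` is a list of (history, explanation, item) triples;
# `recommender.top_k(history, k)` is an assumed black-box interface.

def probability_of_necessity(cases, recommender, k=10):
    """PN: fraction of cases where removing the explanation items
    pushes the recommended item out of the top-k list."""
    hits = 0
    for history, explanation, item in cases:
        reduced = [i for i in history if i not in explanation]
        if item not in recommender.top_k(reduced, k):
            hits += 1
    return hits / len(cases)

def probability_of_sufficiency(cases, recommender, k=10):
    """PS: fraction of cases where keeping only the explanation items
    still keeps the recommended item inside the top-k list."""
    hits = 0
    for history, explanation, item in cases:
        kept = [i for i in history if i in explanation]
        if item in recommender.top_k(kept, k):
            hits += 1
    return hits / len(cases)
```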
“…• Causal explanations. Recently, several works [111,147,266,275] have explored counterfactual explainable recommender systems from the causal perspective. However, finer-grained quantification of causal explanations is desired and yet to be explored, such as quantifying the causal treatment effect of the explanations and extracting the causally strongest explanations accordingly.…”
Section: Open Problems and Relationship With Other Trustworthy Perspectives
confidence: 99%
“…They use the influence function to compute how the predicted score on the explained item might change if some training items were removed. Furthermore, Kaffes et al. [34] propose a black-box solution that performs breadth-first search with heuristics that combine search length and the drop of the item's rank if the candidate set were considered.…”
Section: Counterfactual Machine Learning
confidence: 99%
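A minimal sketch of the kind of black-box breadth-first counterfactual search described in the statement above: enumerate small subsets of the user's history, re-query the recommender with each subset removed, and stop at the first subset whose removal pushes the explained item out of the top-k list. The `recommender.top_k` interface and the size budget are assumptions for illustration, not the authors' exact heuristics.

```python
from collections import deque

def counterfactual_search(history, item, recommender, k=10, max_size=3):
    """Breadth-first search over removal sets drawn from the user's history:
    return a small set of interactions whose removal drops `item` out of the
    top-k list. `recommender.top_k(history, k)` is an assumed black-box call."""
    queue = deque([frozenset()])           # candidate removal sets, smallest first
    seen = {frozenset()}
    while queue:
        removal = queue.popleft()
        remaining = [i for i in history if i not in removal]
        if removal and item not in recommender.top_k(remaining, k):
            return set(removal)            # counterfactual explanation found
        if len(removal) < max_size:
            for i in remaining:            # expand by removing one more item
                child = removal | {i}
                if child not in seen:
                    seen.add(child)
                    queue.append(child)
    return None                            # no counterfactual within the size budget
```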