Proceedings of the 30th ACM International Conference on Information & Knowledge Management (CIKM 2021)
DOI: 10.1145/3459637.3482420

Counterfactual Explainable Recommendation

Abstract: By providing explanations for users and system designers to facilitate better understanding and decision making, explainable recommendation has been an important research problem. In this paper, we propose Counterfactual Explainable Recommendation (CountER), which takes the insights of counterfactual reasoning from causal inference for explainable recommendation. CountER is able to formulate the complexity and the strength of explanations, and it adopts a counterfactual learning framework to seek simple (low complexity) and effective (high strength) explanations for the model decision. […]
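
The optimization the abstract describes lends itself to a short sketch. The following is a minimal, hypothetical PyTorch rendering of that counterfactual search: find the smallest perturbation of an item's aspect vector that reverses the recommendation decision. The score function, the margin epsilon, and the hinge-style trade-off are illustrative assumptions, not the authors' released implementation.

import torch

def counterfactual_explain(score, user_vec, item_aspects,
                           lam=1.0, epsilon=0.1, steps=200, lr=0.01):
    # Sketch only: `score` is an assumed differentiable recommender that
    # maps (user vector, item aspect vector) to a scalar ranking score.
    delta = torch.zeros_like(item_aspects, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    base = score(user_vec, item_aspects).detach()
    for _ in range(steps):
        opt.zero_grad()
        s = score(user_vec, item_aspects + delta)
        complexity = delta.pow(2).sum()              # "simple": keep the change small
        strength = torch.relu(s - (base - epsilon))  # "effective": push the score down
        loss = complexity + lam * strength
        loss.backward()
        opt.step()
    return delta.detach()  # aspects with large |delta| form the explanation

Aspects that receive large perturbations then read as a counterfactual explanation: had these aspect qualities been slightly different, the item would not have been recommended.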


Cited by 92 publications (63 citation statements)
References 51 publications (59 reference statements)
“…The former investigates how people perceive different styles of explanation [8,9,19], while the latter provides explanations by designing new explainable recommendation algorithms, to which our work is more related. There exist various types of explanation styles, such as pre-defined templates [24,49,60], item features [20,54], ranked text [5,12,27], image visualizations [10], knowledge graph paths [1,18,55,56], and reasoning rules [7,46,63], but in this work we focus on generating natural language explanations. Previous works [6,13,26,57] mostly rely on RNN, e.g., LSTM [22] and GRU [14], or un-pretrained Transformer [29], leaving the potentially more effective pre-trained models under-explored, which motivates this work.…”
Section: Explainable Recommendation (mentioning)
confidence: 99%
“…Meanwhile, explanations that justify why these recommendations are made are becoming more and more important, as they can help users make better and faster decisions, increase the system's ease of use, and gain users' trust in the system [50,58]. There is a variety of explanation styles, such as pre-defined templates [24,49,60], highlighted image regions [10], and automatically generated sentences [6,26,29]. The last type has gained increasing attention recently, mainly due to the availability of textual data on online commercial platforms, such as Amazon and Yelp, which encourage users to express their opinions by writing reviews (see Fig.…”
Section: Introduction (mentioning)
confidence: 99%
“…Explainable AI has been an important topic in recommender systems [5,6,13,36,41,46,47], natural language processing [8,16,20] and computer vision [7,10,15,25,38]. To improve the transparency of deep neural networks, many explanation techniques have been proposed in recent years.…”
Section: Related Work 2.1 Explainability in Deep Learning and AI (mentioning)
confidence: 99%
“…They usually map the final prediction to the input space by gradient back-propagation or by linking hidden features to inputs via interpolation. Perturbation/causal-based methods [11,12,15,27,36,37,39] learn feature importance by observing the change in predictions with respect to input perturbations. The idea behind these methods is intuitive: determine which parts of the input are important by either removing the least important information (i.e., pixels in an image, words in text, nodes in a graph) to keep the model prediction the same (factual reasoning), or removing the most important information to change the model prediction (counterfactual reasoning).…”
Section: Related Work 2.1 Explainability in Deep Learning and AI (mentioning)
confidence: 99%
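
To make the factual/counterfactual distinction in the statement above concrete, here is a minimal sketch of perturbation-based attribution. The predict function, the zero baseline, and the one-feature-at-a-time occlusion scheme are illustrative assumptions, not any specific cited method.

import numpy as np

def perturbation_importance(predict, x, baseline=0.0):
    # Score each feature by how much the prediction moves when that
    # feature is replaced with a baseline value ("removed").
    base_pred = predict(x)
    scores = np.zeros(len(x))
    for j in range(len(x)):
        x_pert = x.copy()
        x_pert[j] = baseline                      # occlude feature j
        scores[j] = abs(base_pred - predict(x_pert))
    return scores                                 # high score = important feature

Under factual reasoning one keeps only the highest-scoring features and checks that the prediction is preserved; under counterfactual reasoning one removes them and checks that the prediction flips.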
“…Personalized recommender systems (RS), which are extensively employed in e-commerce platforms, have been acknowledged for their capacity to deliver high-quality services that bridge the gap between products and customers [7,17,44,51]. Despite these huge advantages, several recent studies also raised concerns that RS may be vulnerable to algorithmic bias in several aspects, which may result in detrimental consequences for underrepresented or disadvantaged groups [19,29,43,59].…”
Section: Introduction (mentioning)
confidence: 99%