2020
DOI: 10.1007/s10844-020-00631-8
CAESAR: context-aware explanation based on supervised attention for service recommendations

Cited by 22 publications (10 citation statements)
References 46 publications
“…The former investigates how people perceive different styles of explanation [8,9,19], while the latter provides explanations by designing new explainable recommendation algorithms, to which our work is more related. There exist various types of explanation style, such as pre-defined templates [24,49,60], item features [20,54], ranked text [5,12,27], image visualizations [10], knowledge graph paths [1,18,55,56], and reasoning rules [7,46,63], but in this work we focus on generating natural language explanations. Previous works [6,13,26,57] mostly rely on RNN, e.g., LSTM [22] and GRU [14], or unpretrained Transformer [29], leaving the potentially more effective pre-trained models under-explored, which motivates this work.…”
Section: Explainable Recommendation
confidence: 99%
“…Meanwhile, explanations that justify why these recommendations are made are becoming more and more important, as they can help users make better and faster decisions, increase the system's ease of use, and gain their trust in the system [50,58]. There is a variety of explanation style, such as pre-defined templates [24,49,60], highlighted image regions [10] and automatically generated sentences [6,26,29]. The last type has gained increasing attention recently, mainly due to the availability of textual data on online commercial platforms, such as Amazon and Yelp, which encourage users to express their opinions by writing reviews (see Fig.…”
Section: Introduction
confidence: 99%
“…The former (Gedikli et al, 2014;Chen and Wang, 2017;Chen et al, 2019b) investigates how people perceive different styles of explanations, while the latter provides explanations by designing new explainable recommendation algorithms, to which our work is more related. There exist various types of explanation styles, such as pre-defined templates (Zhang et al, 2014;Li et al, 2020a), ranked sentences (Chen et al, 2019d;, image visualizations (Chen et al, 2019c), knowledge graph paths Xian et al, 2019;, reasoning rules (Shi et al, 2020;Zhu et al, 2021), etc., among which, recently, generated natural language explanations (Ni et al, 2019;Li et al, 2020c) have received much attention, mainly owing to the advancement of natural language generation technology and the availability of textual data on recommendation platforms such as e-commerce. However, previous works mostly rely on recurrent neural networks (RNN), e.g., LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Cho et al, 2014), leaving the potentially more effective Transformer under-explored, which motivates this work.…”
Section: Related Work
confidence: 99%
“…Compared with traditional RS algorithms, e.g., collaborative filtering [16,17], which aim to tackle the information overload problem for users, explainable RS can further improve users' satisfaction and overall experience [18] by helping them better understand recommended items. However, as explanations can take various forms, such as pre-defined template [9,24], generated text [3,10] and path on knowledge graph [6,21], it is difficult to evaluate the explanations produced by different methods.…”
Section: Users
confidence: 99%