2021
DOI: 10.48550/arxiv.2105.11601
Preprint

Personalized Transformer for Explainable Recommendation

Cited by 2 publications (7 citation statements)
References 30 publications
“…• PETER+ [2]: This is an explanation-generating model based on the Transformer. We used it to generate explanations from the user ID, item ID, features, and reviews, in order to compare its effectiveness with that of the proposed method.…”
Section: Comparison of Methods (mentioning)
confidence: 99%
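The statement above describes PETER+ as a Transformer that maps a user ID, an item ID, and feature words to an explanation. As a rough illustration only, not the authors' released implementation, a PETER-style generator can be sketched as below: the user and item IDs are embedded as the first two "tokens" of the input sequence, followed by feature and explanation word tokens, and a Transformer predicts explanation words left to right. All class and parameter names are hypothetical, positional encoding is omitted, and the plain causal mask is a simplification of PETER's specialized attention mask.

import torch
import torch.nn as nn

class PeterStyleExplainer(nn.Module):
    """Toy PETER-style generator: user/item IDs become the first two tokens."""
    def __init__(self, n_users, n_items, vocab_size, d_model=512, n_heads=2, n_layers=2):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, d_model)     # user ID -> vector
        self.item_emb = nn.Embedding(n_items, d_model)     # item ID -> vector
        self.word_emb = nn.Embedding(vocab_size, d_model)  # feature + explanation words
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)      # next-word logits

    def forward(self, user_ids, item_ids, word_ids):
        # Sequence layout: [user, item, feature words..., explanation words...]
        u = self.user_emb(user_ids).unsqueeze(1)  # (B, 1, d)
        i = self.item_emb(item_ids).unsqueeze(1)  # (B, 1, d)
        w = self.word_emb(word_ids)               # (B, T, d)
        x = torch.cat([u, i, w], dim=1)           # (B, T + 2, d)
        # Causal mask: each position attends only to itself and earlier positions.
        L = x.size(1)
        mask = torch.triu(torch.ones(L, L, dtype=torch.bool), diagonal=1)
        h = self.encoder(x, mask=mask)
        return self.lm_head(h)                    # (B, T + 2, vocab_size)

Under these assumptions, calling model(torch.tensor([3]), torch.tensor([7]), torch.tensor([[5, 9, 2]])) yields next-word logits from which explanation tokens can be decoded greedily or by sampling.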
“…The complexity of this model makes it more accurate than, and superior to, the previously popular RNN-based networks. However, existing explanation generation methods produce an explanation based only on a Transformer [2] or an RNN [7,47], ignoring the order of user behaviors, which plays an important role in increasing recommendation accuracy. This study was aimed at addressing this issue.…”
Section: Explanation-Generating Approaches (mentioning)
confidence: 99%
“…For sequential recommendation, we adopt Caser [50], HGN [34], GRU4Rec [21], BERT4Rec [48], FDSA [63], SASRec [24] and S³-Rec [67] as baselines for comparison. For explanation generation, we utilize Attn2Seq [15], NRT [30] and PETER [29] as baselines. For review summarization, we adopt pretrained T0 [45] and GPT-2 [41] as baselines.…”
Section: Baselines for Multiple Tasks (mentioning)
confidence: 99%