Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval 2021
DOI: 10.1145/3404835.3463036

Augmenting Sequential Recommendation with Pseudo-Prior Items via Reversely Pre-training Transformer

Abstract: Sequential recommendation characterizes evolving user patterns by modeling item sequences chronologically; its essential target is to capture item transition correlations. Recent developments in Transformers have inspired the community to design effective sequence encoders, e.g., SASRec and BERT4Rec. However, we observe that these transformer-based models suffer from the cold-start issue, i.e., they perform poorly on short sequences. Therefore, we propose to augment short sequences while still preserving …
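To make the augmentation step concrete, below is a minimal sketch of the idea outlined in the abstract, assuming a generic next-item predictor pre-trained on time-reversed sequences; the function names, predictor interface, and target length are illustrative assumptions, not the authors' implementation. Presumably the reverse predictor is the reversely pre-trained Transformer itself and the augmented sequences then feed a standard chronological training pass; both of those steps are omitted here.

```python
# Minimal sketch: prepend pseudo-prior items to short sequences using a
# predictor trained on reversed sequences. All names and the toy predictor
# below are illustrative assumptions, not the paper's code.
import random
from typing import Callable, List, Sequence

def augment_short_sequence(
    seq: List[int],
    reverse_predictor: Callable[[Sequence[int]], int],
    target_len: int,
) -> List[int]:
    """Prepend pseudo-prior items until the sequence reaches target_len.

    `reverse_predictor` is assumed to take a sequence in reverse
    chronological order (newest first) and return an item that likely
    precedes the earliest observed interaction.
    """
    augmented = list(seq)
    while len(augmented) < target_len:
        # Ask for the "next" item in the reversed direction, i.e. a
        # pseudo-prior item, and place it at the front of the history.
        pseudo_prior = reverse_predictor(list(reversed(augmented)))
        augmented.insert(0, pseudo_prior)
    return augmented

if __name__ == "__main__":
    # Toy usage with a stand-in predictor (uniform over a small catalogue).
    catalogue = list(range(100))
    toy_predictor = lambda rev_seq: random.choice(catalogue)
    print(augment_short_sequence([7, 42, 3], toy_predictor, target_len=8))
```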

Cited by 98 publications (29 citation statements) · References 32 publications

Citation statements, ordered by relevance:
“…SR encodes users' dynamic interests by modeling item-item transition relationships in sequences. Recent advancements in Transformers [24,42] introduce the self-attention mechanism to reveal the position-wise item-item relationships, which leads to state-of-the-art performance in SR. SASRec is the pioneering work in proposing the Transformer for sequential recommendation, applying scaled dot-product self-attention to learn item-item correlation weights. BERT4Rec [38] adopts bi-directional modeling in sequences.…”
Section: Introduction (mentioning)
confidence: 99%
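For readers unfamiliar with the mechanism this excerpt refers to, here is a minimal numpy sketch of single-head scaled dot-product self-attention over item embeddings, the building block attributed above to SASRec-style encoders; the single-head setup, the absence of learned projections, and the causal mask are simplifying assumptions for illustration only.

```python
# Minimal sketch of scaled dot-product self-attention over item embeddings.
# Single head, no learned Q/K/V projections; causal mask is an assumption
# matching left-to-right sequential recommenders.
import numpy as np

def scaled_dot_product_self_attention(items: np.ndarray) -> np.ndarray:
    """items: (seq_len, d) item embeddings; returns contextualised embeddings."""
    seq_len, d = items.shape
    q, k, v = items, items, items
    scores = q @ k.T / np.sqrt(d)                       # pairwise item-item weights
    # Causal mask: each position attends only to itself and earlier items.
    mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    scores = np.where(mask, -np.inf, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    out = scaled_dot_product_self_attention(rng.normal(size=(5, 8)))
    print(out.shape)  # (5, 8)
```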
“…Sequential recommendation (Fan et al., 2021; Liu et al., 2021c; Chen et al., 2018; Tang & Wang, 2018; Zheng et al., 2019) aims at predicting future items in sequences, where the crucial part is to characterize item relationships within sequences. Recent developments in sequence modeling (Fan et al., 2021; Liu et al., 2021c) verify the superiority of the Transformer (Vaswani et al., 2017), i.e., the self-attention mechanism, in revealing item correlations in sequences.…”
Section: Introduction (mentioning)
confidence: 99%
“…However, the data sparsity issue (Liu et al., 2021c) and noise in sequences undermine a model's performance in sequential recommendation. The former hinders performance through insufficient training, since the complex structure of a sequential model requires a dense corpus to be adequately trained.…”
Section: Introduction (mentioning)
confidence: 99%
“…If we can generate high-quality pseudo-labeled examples for these few-shot intents, we can effectively alleviate this issue and improve performance. There are only a few previous works [11,12,16,26] that try to augment the training data with generation methods to alleviate the scarce-annotation problem. However, these models utilize simple neural networks with limited capacity, such as LSTMs [7], for text generation.…”
Section: Introduction (mentioning)
confidence: 99%