Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020
DOI: 10.1145/3340531.3411954

S3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization

Abstract: Recently, significant progress has been made in sequential recommendation with deep learning. Existing neural sequential recommendation models usually rely on the item prediction loss to learn model parameters or data representations. However, a model trained with this loss is prone to suffer from the data sparsity problem. Since it overemphasizes the final performance, the association or fusion between context data and sequence data has not been well captured and utilized for sequential recommendation. To tackle…
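The abstract's central device, maximizing mutual information between related views of the same interaction sequence, is commonly implemented with an InfoNCE-style contrastive loss. Below is a minimal sketch of such a loss; the function name, tensor shapes, and temperature value are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(anchor, positive, temperature=0.07):
    # anchor, positive: (batch, dim) paired views whose mutual information
    # we want to maximize; row i of `positive` is the positive example for
    # row i of `anchor`, and all other rows in the batch act as negatives.
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature  # (batch, batch) cosine similarities
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)  # diagonal entries are the matching pairs
```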

Cited by 544 publications (342 citation statements: 1 supporting, 341 mentioning, 0 contrasting)
References 19 publications

Selected citation statements, ordered by relevance:
“…As the research of self-supervised learning is still in its infancy, there are only a few works combining it with recommender systems [24,44,45,64]. These efforts either mine self-supervision signals from future/surrounding sequential data [24,45], or mask attributes of items/users to learn correlations in the raw data [64]. However, these ideas cannot be easily adapted to social recommendation, where temporal factors and attributes may not be available.…”
Section: Self-supervised Learning · Citation type: mentioning · Confidence: 99%
“…However, the transitivity assumption of the messages is not always valid due to the existence of negative edges. Inspired by recent self-supervised learning work with MI maximization [57,58] and knowledge graph embedding (KGE) [59–61], we propose a relation representation learning framework via signed graph mutual information maximization, and then use the learned vector representations as the input of neural networks to perform the task of trust prediction.…”
Section: Proposed Methods · Citation type: mentioning · Confidence: 99%
“…Considering the effectiveness of contrastive learning in various fields, contrastive learning has recently also been introduced into recommender systems for learning accurate representations of users and items. For instance, Zhou et al. propose to maximize the mutual information between different forms or granularities of the item sequence to enhance item representation learning and improve sequential recommendation [27]. Moreover, Xie et al. propose to enhance supervised learning with contrastive learning in a pre-training manner, extracting self-supervision signals by contrasting the same item sequence across views generated by different augmentation methods [28].…”
Section: Contrastive Learning · Citation type: mentioning · Confidence: 99%
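The augmented-views idea this excerpt attributes to Xie et al. [28] can be sketched as deriving two stochastic views of one interaction sequence and treating them as a positive pair for a contrastive loss such as the InfoNCE sketch above. Crop and mask are two commonly used operations; the function names, ratios, and mask id are illustrative assumptions, and sequences are assumed non-empty.

```python
import random

def crop(seq, ratio=0.6):
    # Keep a random contiguous sub-sequence of the interaction history.
    n = max(1, int(len(seq) * ratio))
    start = random.randint(0, len(seq) - n)
    return seq[start:start + n]

def item_mask(seq, ratio=0.3, mask_id=0):
    # Replace a random subset of items with a [MASK] token id.
    return [mask_id if random.random() < ratio else item for item in seq]

def two_views(seq):
    # Sample two independent augmentations of the same sequence; the
    # resulting pair is treated as positives in the contrastive loss.
    return random.choice([crop, item_mask])(seq), random.choice([crop, item_mask])(seq)

# Example: v1, v2 = two_views([12, 7, 33, 5, 18])
```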