Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 2021
DOI: 10.1145/3459637.3481952

Self-supervised Learning for Large-scale Item Recommendations

Cited by 158 publications (61 citation statements)
References 23 publications

“…For the most typical dropout setup, we apply dropout only to the fully-connected layers [14,31]. In particular, we neither enable dropout on convolution layers on EMNIST, nor on embeddings [55].…”
Section: Variation Estimation Task Configurations (mentioning, confidence: 99%)
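A minimal PyTorch sketch of the dropout placement described in this excerpt: dropout applied only to the fully-connected head, with no dropout on convolution layers or embeddings. The architecture and the rate of 0.5 are illustrative assumptions, not the citing paper's exact model.

```python
# Dropout only on fully-connected layers; none on convolutions or embeddings.
# Architecture and p=0.5 are illustrative assumptions.
import torch.nn as nn

class ConvNet(nn.Module):
    def __init__(self, num_classes: int = 47):  # EMNIST Balanced has 47 classes
        super().__init__()
        # Convolutional feature extractor: intentionally no dropout here.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Classifier head: dropout applied only around the fully-connected layers.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p=0.5),
            nn.Linear(64 * 7 * 7, 128), nn.ReLU(),
            nn.Dropout(p=0.5),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```
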
“…To characterize the intrinsic data correlations, S3Rec [44] combines SSL with sequential recommendation by utilizing four MIM objectives, i.e., Item-Attribute MIM, Sequence-Item MIM, Sequence-Attribute MIM, and Sequence-Sequence MIM. In large-scale item recommendations, an auxiliary SSL task is employed to explore feature correlations by applying different feature masking patterns [45]. CL4SRec [46] proposes three data augmentation techniques (i.e., cropping, masking and reordering) from which two methods are randomly sampled and applied to each user sequence.…”
Section: B. Self-supervised Learning (mentioning, confidence: 99%)
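The three CL4SRec augmentations named here are simple enough to sketch. Below is a minimal Python version of cropping, masking, and reordering; the ratios, the mask token, and the way two methods are sampled per sequence are illustrative assumptions rather than the paper's exact settings.

```python
# Sketch of CL4SRec-style sequence augmentations; all hyperparameters assumed.
import random
from typing import List

MASK_TOKEN = 0  # assumed placeholder id for masked items

def crop(seq: List[int], ratio: float = 0.6) -> List[int]:
    """Keep a random contiguous sub-sequence covering `ratio` of the items."""
    n = max(1, int(len(seq) * ratio))
    start = random.randint(0, len(seq) - n)
    return seq[start:start + n]

def mask(seq: List[int], ratio: float = 0.3) -> List[int]:
    """Replace a random subset of items with the mask token."""
    idx = set(random.sample(range(len(seq)), int(len(seq) * ratio)))
    return [MASK_TOKEN if i in idx else item for i, item in enumerate(seq)]

def reorder(seq: List[int], ratio: float = 0.3) -> List[int]:
    """Shuffle a random contiguous sub-sequence."""
    n = max(1, int(len(seq) * ratio))
    start = random.randint(0, len(seq) - n)
    sub = seq[start:start + n]
    random.shuffle(sub)
    return seq[:start] + sub + seq[start + n:]

def contrastive_views(seq: List[int]):
    """Sample two augmentations and apply each to the same user sequence."""
    aug_a, aug_b = random.sample([crop, mask, reorder], 2)
    return aug_a(list(seq)), aug_b(list(seq))
```
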
“…As a result, similar interests can thus have similar representations (defined as alignment) and sufficient information is kept to distinguish different interests (defined as uniformity). The alignment and uniformity properties are necessary and important for a good SSL system, as proved in [52]. Formally, taking $(z_p^{i,1}, z_p^{i,2})$ from the same interest as positive pairs, while pairs such as $(z_p^{i,1}, z_q^{i,2})$ from different interests and $(z_p^{i,1}, z_p^{j,2})$ from different samples as negative pairs, the InfoNCE loss for learning the interest-level correlation is formulated as:…”
Section: B. The MISS Framework (mentioning, confidence: 99%)
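The quoted statement is cut off before the loss itself. For reference, a standard InfoNCE form consistent with the notation above is sketched below; the similarity function sim and the temperature τ are assumptions, and the citing paper's exact formulation may differ.

```latex
% Standard InfoNCE over a batch of N samples; sim(., .) is a similarity
% measure (e.g., cosine) and \tau a temperature -- both assumed here.
\mathcal{L}_{\mathrm{InfoNCE}} =
  -\sum_{i=1}^{N} \log
  \frac{\exp\big(\mathrm{sim}(z_p^{i,1},\, z_p^{i,2})/\tau\big)}
       {\sum_{j=1}^{N} \exp\big(\mathrm{sim}(z_p^{i,1},\, z_p^{j,2})/\tau\big)}
```
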
“…Secondly, existing GNN-based CF approaches rely on explicit interaction links for learning node representations, while high-order relations or constraints (e.g., user or item similarity) cannot be explicitly utilized for enriching the graph information, which has been shown to be essentially useful in recommendation tasks [24,27,35]. Although several recent studies leverage contrastive learning to alleviate the sparsity of interaction data [33,39], they construct the contrastive pairs by randomly sampling nodes or corrupting subgraphs. They lack consideration of how to construct more meaningful contrastive learning tasks tailored to the recommendation task [24,27,35].…”
Section: Introduction (mentioning, confidence: 99%)
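The "randomly sampling nodes or corrupting subgraphs" strategy criticized in this excerpt can be sketched as edge dropout over the interaction graph, in the spirit of SGL-style methods. The edge-list representation and the drop rate below are illustrative assumptions.

```python
# Hypothetical sketch: build two contrastive views of a user-item interaction
# graph by random edge dropout (SGL-style corruption). The edge-list format
# and drop rate are illustrative assumptions.
import numpy as np

def drop_edges(edge_index, drop_rate=0.1, rng=None):
    """Keep each (user, item) edge with probability 1 - drop_rate.

    edge_index: int array of shape (2, num_edges).
    """
    rng = rng or np.random.default_rng()
    keep = rng.random(edge_index.shape[1]) >= drop_rate
    return edge_index[:, keep]

# Toy interaction graph: users 0-2, items 5-7.
edges = np.array([[0, 0, 1, 2],
                  [5, 6, 5, 7]])

# Embeddings computed on view_a and view_b for the same node form a positive
# pair; embeddings of different nodes act as negatives.
view_a, view_b = drop_edges(edges), drop_edges(edges)
```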