Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining 2023
DOI: 10.1145/3539597.3570419
Disentangled Negative Sampling for Collaborative Filtering

Cited by 9 publications (10 citation statements) | References 33 publications
“…1. Specifically, for a positive pair (u, i+), we follow conventional methods (Chen et al. 2022; Lai et al. 2023) and adapt the two-pass sampling idea, which first randomly samples a fixed size of uninteracted items to form a candidate set, and then selects a negative sample from the candidate set according to predefined rating functions and sampling rules.…”
Section: Methods Design
confidence: 99%
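The two-pass idea described above can be sketched as follows. This is a minimal illustration, not the cited papers' implementation: the function name, the candidate-set construction, and the use of a highest-score selection rule as the second pass are all assumptions for the example.

```python
import random

def two_pass_negative_sample(user_interacted, num_items, candidate_size, rating_fn):
    """Sketch of two-pass negative sampling (hypothetical helper).

    Pass 1: uniformly sample a fixed-size candidate set of uninteracted items.
    Pass 2: select one negative from the candidates via a rating function;
    here we use "highest predicted score" as one example sampling rule.
    """
    candidates = []
    while len(candidates) < candidate_size:
        item = random.randrange(num_items)
        if item not in user_interacted:
            candidates.append(item)
    # Second pass: the predefined rule picks the top-rated candidate.
    return max(candidates, key=rating_fn)
```

In practice `rating_fn` would be the recommender's current score for the (user, item) pair, so the selected negative adapts as training progresses.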
“…Hard negative sampling methods emphasize the importance of oversampling hard negative samples to speed up the training process and find more precise delineations of user interests. More specifically, it is achieved by either assigning higher sampling probabilities to items with larger predicted scores (Zhang et al. 2013; Ding et al. 2020; Huang et al. 2021; Zhu et al. 2022; Lai et al. 2023; Shi et al. 2023; Zhao et al. 2023) or leveraging adversarial learning techniques (Wang et al. 2017; Cai and Wang 2018; Park and Chang 2019). For instance, dynamic negative sampling (DNS) (Zhang et al. 2013) selects the item with the highest predicted score in a candidate negative sample set.…”
Section: Hard Negative Sampling
confidence: 99%
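The DNS rule mentioned in the statement above (pick the highest-scored item from a sampled candidate pool) can be sketched with a dot-product scoring model. The function name and the matrix-factorization scoring are assumptions for illustration, not the original DNS code.

```python
import numpy as np

def dns_sample(user_vec, item_embs, interacted, candidate_size, rng):
    """Sketch of dynamic negative sampling (DNS, Zhang et al. 2013).

    Draws a uniform pool of uninteracted items, scores each with the
    current model (here: a dot product against the user embedding),
    and returns the candidate the model scores highest.
    """
    pool = np.array([i for i in range(item_embs.shape[0]) if i not in interacted])
    cand = rng.choice(pool, size=candidate_size, replace=False)
    scores = item_embs[cand] @ user_vec  # predicted preference per candidate
    return int(cand[np.argmax(scores)])
```

Because the scores come from the model being trained, the "hardest" negative shifts over epochs, which is what makes the sampling dynamic.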