Fifteenth ACM Conference on Recommender Systems 2021
DOI: 10.1145/3460231.3475943
A Case Study on Sampling Strategies for Evaluating Neural Sequential Item Recommendation Models

Abstract: At the present time, sequential item recommendation models are compared by calculating metrics on a small item subset (target set) to speed up computation. The target set contains the relevant item and a set of negative items that are sampled from the full item set. Two well-known strategies to sample negative items are uniform random sampling and sampling by popularity to better approximate the item frequency distribution in the dataset. Most recently published papers on sequential item recommendation rely on…
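The two sampling strategies named in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code; the item counts and the `sample_negatives` helper are hypothetical, and a heavy-tailed count distribution is assumed to stand in for real interaction data.

```python
import numpy as np

rng = np.random.default_rng(0)

n_items = 1000
# Hypothetical per-item interaction counts (heavy-tailed, like real datasets).
item_counts = rng.zipf(1.5, size=n_items).astype(float)

def sample_negatives(relevant_item, k=100, strategy="uniform"):
    """Sample k negative items for one evaluation instance.

    'uniform' draws every item with equal probability; 'popularity'
    draws items proportionally to their interaction counts, so the
    sample better approximates the dataset's item frequency distribution.
    """
    candidates = np.setdiff1d(np.arange(n_items), [relevant_item])
    if strategy == "uniform":
        probs = None
    else:  # popularity-based sampling
        weights = item_counts[candidates]
        probs = weights / weights.sum()
    return rng.choice(candidates, size=k, replace=False, p=probs)

negatives = sample_negatives(relevant_item=42, k=100, strategy="popularity")
# The target set scored by the model is the relevant item plus the negatives.
target_set = np.concatenate(([42], negatives))
```

Metrics such as Recall@k are then computed over `target_set` instead of the full catalogue, which is the speed-up the abstract refers to.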

Cited by 39 publications (14 citation statements)
References 31 publications
“…2, we note that general magnitudes of the reported effectiveness results are smaller than those reported in [42] - indeed, as stated in Section 5.4, in contrast to [42], we follow recent advice [5,20] to avoid sampled metrics, instead preferring the more accurate unsampled metrics. The magnitudes of effectiveness reported for MovieLens-20M are in line with those reported by [9] (e.g. a Recall@10 of 0.137 for SASRec-vanilla is reported in [9] when also using a Leave-One-Out evaluation scheme and unsampled metrics).…”
Section: Data Splitting and Evaluation Measures (supporting)
confidence: 81%
“…On first inspection of Table 2, we note that general magnitudes of the reported effectiveness results are smaller than those reported in [42] - indeed, as stated in Section 5.4, in contrast to [42], we follow recent advice [5,20] to avoid sampled metrics, instead preferring the more accurate unsampled metrics. The magnitudes of effectiveness reported for MovieLens-20M are in line with those reported by [9] (e.g. a Recall@10 of 0.137 for SASRec-vanilla is reported in [9] when also using a Leave-One-Out evaluation scheme and unsampled metrics).…”
Section: Rq1 Benefit Of Recency Sampling (supporting)
confidence: 81%
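The sampled-versus-unsampled distinction these statements rely on can be sketched in a few lines. This is an illustrative toy, not code from either paper; the scores and item indices are made up, and uniform negative sampling is assumed. The key point is that a sampled Recall@k can never be lower than the unsampled one, since removing competitors from the ranking can only improve the relevant item's rank.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items = 5000

# Hypothetical model scores for one user over the full catalogue.
scores = rng.normal(size=n_items)
relevant = 123
scores[relevant] += 2.0  # nudge the relevant item's score upward

def recall_at_k(scores, relevant, candidates, k=10):
    """Recall@k over a candidate set: 1.0 if the relevant item ranks
    in the top k among the candidates, else 0.0."""
    ranked = candidates[np.argsort(-scores[candidates])]
    return float(relevant in ranked[:k])

# Unsampled metric: rank the relevant item against the full item set.
full = np.arange(n_items)
r_full = recall_at_k(scores, relevant, full)

# Sampled metric: rank it against only 100 uniformly drawn negatives.
negatives = rng.choice(np.setdiff1d(full, [relevant]), size=100, replace=False)
r_sampled = recall_at_k(scores, relevant, np.concatenate(([relevant], negatives)))
```

Because the sampled target set is a subset of the full candidate set (and still contains the relevant item), `r_sampled >= r_full` always holds, which is why sampled metrics tend to overstate effectiveness relative to unsampled ones.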
“…We use an adaptation of the original code for this model. 9 For SASRec we set sequence length to 50, embedding size to 50 and use 2 transformer blocks; according to the experiments conducted by Kang et al [24], these parameters are within the range where SASRec shows reasonable performance.…”
Section: Models (mentioning)
confidence: 99%