Proceedings of the ACM Recommender Systems Challenge 2018
DOI: 10.1145/3267471.3267475
Artist-driven layering and user's behaviour impact on recommendations in a playlist continuation scenario

Abstract: In this paper we provide an overview of the approach we used as team Creamy Fireflies for the ACM RecSys Challenge 2018. The competition, organized by Spotify, focuses on the problem of playlist continuation, that is, suggesting which tracks the user may add to an existing playlist. The challenge addresses this issue in many use cases, from playlist cold start to playlists already composed of up to a hundred tracks. Our team proposes a solution based on a few well known models, both content based and collaborati…
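The playlist-continuation task the abstract describes can be illustrated with a minimal item-based collaborative filtering sketch: score candidate tracks by their cosine similarity to the tracks already in the playlist. The toy playlists and the scoring rule below are illustrative assumptions, not the Creamy Fireflies solution itself.

```python
import numpy as np

# Toy data: each playlist is a list of track ids (hypothetical, for illustration).
playlists = [[0, 1, 2], [0, 1, 3], [2, 3, 4], [1, 2, 4]]
n_tracks = 5

# Binary playlist-by-track interaction matrix.
R = np.zeros((len(playlists), n_tracks))
for p, tracks in enumerate(playlists):
    R[p, tracks] = 1.0

# Cosine item-item similarity from track co-occurrence in playlists.
norms = np.linalg.norm(R, axis=0)
S = (R.T @ R) / np.outer(norms, norms)
np.fill_diagonal(S, 0.0)  # a track is not similar to itself for ranking purposes

def continue_playlist(seed_tracks, k=2):
    """Rank unseen tracks by summed similarity to the playlist's seed tracks."""
    scores = S[seed_tracks].sum(axis=0)
    scores[seed_tracks] = -np.inf  # never re-recommend tracks already present
    return [int(t) for t in np.argsort(-scores)[:k]]
```

For a playlist containing tracks 0 and 1, the function ranks the remaining tracks by their aggregated co-occurrence strength with those seeds; a content-based variant would replace `S` with a similarity computed from track metadata.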

Cited by 22 publications (23 citation statements)
References 8 publications
“…Specifically, we included long papers in our analysis that appeared between 2015 and 2018 in the following four conference series: KDD, SIGIR, TheWebConf (WWW), and RecSys. 1 We considered a paper to be relevant if it (a) proposed a deep learning based technique and (b) focused on the top-n recommendation problem. Papers on other recommendation tasks, e.g., group recommendation or session-based recommendation, were not considered in our analysis.…”
Section: Research Methods 2.1 Collecting Reproducible Papers
confidence: 99%
“…For all baseline algorithms and datasets, we determined the optimal parameters via Bayesian search [1] using the implementation of Scikit-Optimize 6 . We explored 35 cases for each algorithm, where the first 5 were used for the initial random points.…”
Section: Evaluation Methodology
confidence: 99%
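The tuning setup quoted above (Bayesian search, 35 evaluations with the first 5 random, via Scikit-Optimize) can be sketched in miniature without the library. The Gaussian-process surrogate, RBF kernel, expected-improvement acquisition, and the toy quadratic objective below are all illustrative assumptions, not the cited implementation.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def _cdf(z):  # standard normal CDF
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def _pdf(z):  # standard normal PDF
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def rbf(a, b, length=0.15):
    """RBF kernel between two 1-D point sets (unit prior variance assumed)."""
    d = np.subtract.outer(np.asarray(a, float), np.asarray(b, float))
    return np.exp(-0.5 * (d / length) ** 2)

def bayesian_search(objective, candidates, n_calls=35, n_random=5, seed=0):
    """Minimise `objective` over a candidate grid: random warm-up, then
    GP posterior + expected improvement to pick each next evaluation."""
    cand = np.asarray(candidates, dtype=float)
    rng = np.random.default_rng(seed)
    chosen = list(rng.choice(len(cand), size=n_random, replace=False))
    y = [objective(cand[i]) for i in chosen]
    for _ in range(n_calls - n_random):
        xs, ys = cand[chosen], np.array(y)
        mean = ys.mean()  # constant-mean GP prior
        K = rbf(xs, xs) + 1e-8 * np.eye(len(xs))
        Ks = rbf(cand, xs)
        Kinv = np.linalg.inv(K)
        mu = mean + Ks @ Kinv @ (ys - mean)
        var = 1.0 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks)
        sigma = np.sqrt(np.clip(var, 1e-12, None))
        best = ys.min()
        ei = np.array([s * (z * _cdf(z) + _pdf(z))
                       for z, s in zip((best - mu) / sigma, sigma)])
        ei[chosen] = -1.0  # never re-evaluate a point
        i = int(np.argmax(ei))
        chosen.append(i)
        y.append(objective(cand[i]))
    b = int(np.argmin(y))
    return cand[chosen[b]], y[b]
```

In practice one would minimise the negative of the target metric (e.g. negative MAP) over the real hyper-parameter ranges; Scikit-Optimize's `gp_minimize` plays the role of this loop.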
“…Secondly, the tuning of the hyper-parameters of the feature weighting machine learning is performed in a similar way, again optimizing MAP. We searched the optimal hyper-parameters via a Bayesian search (Antenucci et al 2018) using the implementation of Scikit-Optimize. 13 As for different aggregation methods designed for the audio and visual features, we chose the best performing ones with regards to the metric under study.…”
Section: Hyper-parameter Tuning
confidence: 99%
“…For each model we report in Table 1 the best MRR achieved on the private validation dataset. The hyperparameters tuning is done via Random and Bayesian search [2,4,5].…”
Section: Experimental Evaluation
confidence: 99%
“…We publicly release our source code with additional documentation and information on our solution. 2…”
Section: Introduction
confidence: 99%