Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval 2017
DOI: 10.1145/3077136.3080779
Embedding Factorization Models for Jointly Recommending Items and User Generated Lists

Cited by 114 publications (86 citation statements)
References 31 publications
“…He et al [17] propose a hierarchical self-attentive model for recommending user-generated item lists (e.g., book lists and playlists) to the right users. In addition, the List Recommending Model in [26] is proposed for recommending book lists, and the Embedding Factorization Model in [3] for recommending song playlists.…”
Section: Related Work
confidence: 99%
“…Following the preprocessing setup in [3,23], we filter out items appearing in fewer than 5 lists. To study the impact of data density on recommendation performance, we also filter out users who have interacted with lists fewer than 5 times on Spotify and Zhihu, so that they are denser than the Goodreads dataset.…”
Section: Datasets: Goodreads, Spotify and Zhihu
confidence: 99%
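The density filtering described in the quoted passage is straightforward to reproduce. Below is a minimal sketch, assuming interaction tables with user_id / list_id / item_id columns; the column names and the pandas-based setup are illustrative assumptions, not the authors' code, while the thresholds of 5 follow the quoted text.

```python
# Minimal sketch of the density filtering quoted above; column names and the
# pandas-based setup are assumptions, the thresholds follow the quoted text.
import pandas as pd

MIN_LISTS_PER_ITEM = 5              # items appearing in fewer than 5 lists are dropped
MIN_LIST_INTERACTIONS_PER_USER = 5  # users with fewer than 5 list interactions are dropped


def filter_sparse_items(list_items: pd.DataFrame) -> pd.DataFrame:
    """Keep only items that appear in at least MIN_LISTS_PER_ITEM distinct lists."""
    lists_per_item = list_items.groupby("item_id")["list_id"].nunique()
    kept_items = lists_per_item[lists_per_item >= MIN_LISTS_PER_ITEM].index
    return list_items[list_items["item_id"].isin(kept_items)]


def filter_sparse_users(interactions: pd.DataFrame) -> pd.DataFrame:
    """Keep only users with at least MIN_LIST_INTERACTIONS_PER_USER list interactions."""
    per_user = interactions.groupby("user_id")["list_id"].count()
    kept_users = per_user[per_user >= MIN_LIST_INTERACTIONS_PER_USER].index
    return interactions[interactions["user_id"].isin(kept_users)]
```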
“…The first factor is the inner product between user latent factors and list latent factors; the second factor is the sum of inner products between the user factor and item latent factors within the list. [3] is also a Bayesian-based pair-wise model designed for list recommendation. Inspired by the paragraph2vec model in [18], it calculates the shifted positive pointwise mutual information (SPPMI) value between lists and items and uses this information to boost the ranking performance.…”
Section: Baselines
confidence: 99%
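The two-factor scoring function and the SPPMI weighting mentioned in the quoted passage can be summarised compactly. The NumPy sketch below assumes pre-learned user, list, and item embedding vectors plus raw list-item co-occurrence counts; averaging the item term, the variable names, and the Levy-Goldberg form of SPPMI are assumptions for illustration, not the exact formulation in [3].

```python
import numpy as np


def list_score(user_vec: np.ndarray, list_vec: np.ndarray, item_vecs: np.ndarray) -> float:
    """Two-factor score from the quoted baseline: a user-list inner product plus
    the inner products between the user factor and the item factors inside the
    list (averaged here, which is an assumption about the exact aggregation)."""
    user_list_term = float(user_vec @ list_vec)
    user_item_term = float(np.mean(item_vecs @ user_vec)) if len(item_vecs) else 0.0
    return user_list_term + user_item_term


def sppmi(count_li: float, count_l: float, count_i: float, total: float, shift_k: int = 1) -> float:
    """Shifted positive PMI between a list l and an item i, using the standard
    formulation assumed here: max(log(#(l,i) * D / (#(l) * #(i))) - log(k), 0)."""
    if count_li == 0:
        return 0.0
    pmi = np.log(count_li * total / (count_l * count_i))
    return float(max(pmi - np.log(shift_k), 0.0))
```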
“…Another angle from latent feature analysis is location-aware topic modelling, i.e., finding implicit locations for entity names [4,8], again through location-aware extensions or additions to existing topic models. Preprocessing is needed to incorporate the location semantics.…”
Section: Introduction
confidence: 99%