Proceedings of the 13th ACM Conference on Recommender Systems 2019
DOI: 10.1145/3298689.3347036
Variational low rank multinomials for collaborative filtering with side-information

Abstract: We are interested in Bayesian models for collaborative filtering that incorporate side-information or metadata about items in addition to user-item interaction data. We present a simple and flexible framework to build models for this task that exploit the low-rank structure in user-item interaction datasets. Although the resulting models are non-conjugate, we develop an efficient technique for approximating posteriors over model parameters using variational inference. We borrow the "re-parameterization trick" …
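The abstract is truncated, but the "re-parameterization trick" it refers to is a standard device in variational inference: sample from the variational posterior as a deterministic transform of parameter-free noise, so that gradients of the objective can flow through the posterior's parameters. A minimal NumPy sketch, assuming a diagonal-Gaussian posterior over a user's low-rank factor and a multinomial (softmax) likelihood over items; the names (`mu`, `log_var`, `item_factors`) are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # z ~ N(mu, diag(exp(log_var))) written as z = mu + sigma * eps with
    # eps ~ N(0, I), so gradients can flow through mu and log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def multinomial_log_likelihood(z, item_factors, counts):
    # Low-rank multinomial: item logits come from a k-dim user factor z.
    logits = item_factors @ z                          # (n_items,)
    logits -= logits.max()                             # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())  # log-softmax
    return counts @ log_probs
```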

Cited by 10 publications (9 citation statements) · References 12 publications

Citation statements, ordered by relevance:
“…Scalability of model training and prediction/inference: Scaling deep‐learning model‐training not only depends on a specialized software/hardware stack but also on the characteristics of the datasets. For example, consider multiclass classification, which is a very important task in recommendation systems (Elahi et al 2019; Liang et al 2018). Multiclass classification is usually done by using a softmax activation in the output‐layer of deep‐learning models.…”
Section: Practical Challenges (mentioning; confidence: 99%)
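To make the scalability point in that excerpt concrete: a softmax output layer costs time and memory proportional to the number of classes per example, which is what makes very large item catalogues expensive. A minimal, numerically stable sketch (shapes are illustrative):

```python
import numpy as np

def softmax(logits):
    # Subtract the row-wise max before exponentiating for numerical stability.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

# Output layer over a 1M-item catalogue: every forward pass touches all classes.
probs = softmax(np.random.default_rng(0).standard_normal((4, 1_000_000)))
```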
“…We present additive and collective EASE^R (Add-EASE^R and CEASE^R), and show how these novel methods retain a closed-form solution whilst leveraging signals embedded in side-information to generate more effective recommendations. We show how these straightforward and complementary extensions of the EASE^R paradigm consistently outperform state-of-the-art approaches such as CVAE [3] and VLM [9]. Additionally, we empirically validate that Add-EASE^R and CEASE^R are indeed able to soften the effect of the long tail, and are more likely to recommend different and less popular items than plain EASE^R.…”
Section: Introduction (mentioning; confidence: 65%)
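For context, plain EASE^R (Steck, 2019) computes its item-item weight matrix in closed form from the Gram matrix of the interaction data; the Add-EASE^R/CEASE^R extensions described above preserve this property while folding in side-information. A minimal sketch of the base model only, under assumed defaults (the λ value and the dense inverse are illustrative; practical implementations work on sparse matrices):

```python
import numpy as np

def ease_r(X, lam=500.0):
    # X: (n_users, n_items) binary interaction matrix.
    G = X.T @ X + lam * np.eye(X.shape[1])  # regularized Gram matrix
    P = np.linalg.inv(G)
    B = -P / np.diag(P)                     # column j divided by P[j, j]
    np.fill_diagonal(B, 0.0)                # zero-diagonal constraint
    return B                                # scores for all users: S = X @ B
```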
“…Several hurdles for recommender systems remain, such as the “long tail” (very few items account for the large majority of interactions) and “cold start” (new items do not have any interactions) issues [22,27,30]. It has become common practice to exploit item side-information or metadata to try and alleviate these problems, and several recent works show that they indeed succeed at this [3,9]. In this work, we study the applicability of EASE^R-like models in the presence of such metadata.…”
Section: Introduction (mentioning; confidence: 99%)
“…We also find that some papers [123,132] only perform the filtering in an unsymmetrical way, e.g., only for users or items. Besides, several papers [27,34] set a relatively large value for n (e.g., n = 25, 30), which can derive a more dense dataset. However, it might significantly change the overall distributions of user-item interactions of the original dataset.…”
Section: Dataset Selection and Preprocessing (mentioning; confidence: 99%)
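A sketch of the n-core filtering that excerpt discusses, applied symmetrically to both users and items (pandas; the column names are assumptions). Note how a large n, as in the papers criticized above, prunes aggressively and thereby shifts the interaction distribution:

```python
import pandas as pd

def n_core_filter(df: pd.DataFrame, n: int = 5,
                  user_col: str = "user", item_col: str = "item") -> pd.DataFrame:
    # Iteratively drop users/items with fewer than n interactions: removing a
    # sparse user can push an item below the threshold, so loop to a fixed point.
    while True:
        keep = (df[user_col].map(df[user_col].value_counts()) >= n) & \
               (df[item_col].map(df[item_col].value_counts()) >= n)
        if keep.all():
            return df
        df = df[keep]
```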