2022
DOI: 10.48550/arxiv.2205.13911
Preprint

Pseudo-Mallows for Efficient Probabilistic Preference Learning

Abstract: We propose the Pseudo-Mallows distribution over the set of all permutations of n items, to approximate the posterior distribution with a Mallows likelihood. The Mallows model has been proven to be useful for recommender systems where it can be used to learn personal preferences from highly incomplete data provided by the users. Inference based on MCMC is however slow, preventing its use in real time applications. The Pseudo-Mallows distribution is a product of univariate discrete Mallows-like distributions, co…
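As a rough illustration of the factorization described in the abstract, the sketch below draws a permutation by sampling each item's rank from a univariate Mallows-like distribution over the ranks that remain available. The footrule distance, the fixed item order, the function name pseudo_mallows_sample, and the sequential constraint to unassigned ranks are assumptions made for this example only; they are not necessarily the construction used in the paper.

import numpy as np

def pseudo_mallows_sample(rho, alpha, rng=None):
    # rho   : consensus ranking (ranks 1..n), hypothetical input
    # alpha : dispersion parameter; larger values concentrate samples around rho
    rng = np.random.default_rng() if rng is None else rng
    n = len(rho)
    available = list(range(1, n + 1))   # ranks not yet assigned
    r = np.empty(n, dtype=int)
    for j in range(n):                  # assumed fixed item order
        ranks = np.array(available)
        # univariate Mallows-like factor based on the footrule distance to rho[j]
        logp = -alpha / n * np.abs(ranks - rho[j])
        p = np.exp(logp - logp.max())
        p /= p.sum()
        r[j] = rng.choice(ranks, p=p)
        available.remove(r[j])          # keeps the draw a valid permutation
    return r

# Example: sample around the consensus ranking (1, 2, 3, 4, 5)
rho = np.array([1, 2, 3, 4, 5])
print(pseudo_mallows_sample(rho, alpha=3.0))

Larger values of alpha concentrate the draws around the consensus ranking rho, mirroring the role of the dispersion parameter in the Mallows likelihood.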


Cited by 2 publications (2 citation statements)
References 6 publications
“…The average computing time for Bayes Mallows with C = 16 was 9 h 12 min, while mixtures with a smaller (larger) number of groups require slightly less (more) computing time. We are currently working on a Variational Bayes alternative to the MCMC algorithm [37] that samples efficiently from an approximate model; this will significantly speed up computation, as this alternative implementation is scalable in the number of features and of mixture components.…”
Section: Discussion (mentioning)
Confidence: 99%
“…Nonetheless, we would still need an estimation of its most reasonable value when we are given a new ranking dataset. We propose one possible method for estimating such a value, inspired by Liu et al. [34]…”
Section: Lower-dimensional Bayesian Mallows Model (mentioning)
Confidence: 99%