2016
DOI: 10.1007/s11336-016-9530-0

Bayesian Plackett–Luce Mixture Models for Partially Ranked Data

Abstract: The elicitation of an ordinal judgment on multiple alternatives is required in many psychological and behavioral experiments to investigate the preference/choice orientation of a specific population. The Plackett–Luce model is one of the most popular and frequently applied parametric distributions for analyzing rankings of a finite set of items. The present work introduces a Bayesian finite mixture of Plackett–Luce models to account for unobserved sample heterogeneity of partially ranked data. We describe an ef…
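For reference, a minimal statement of the Plackett–Luce likelihood, which is standard in this literature (the notation below is illustrative rather than necessarily the paper's): for a top-n partial ordering rho = (rho(1), ..., rho(n)) of K items with positive support parameters p_1, ..., p_K,

\Pr(\rho \mid p) \;=\; \prod_{t=1}^{n} \frac{p_{\rho(t)}}{\sum_{v=t}^{K} p_{\rho(v)}},

i.e., items are chosen sequentially, with each stage picking one of the remaining items with probability proportional to its support parameter; a full ranking corresponds to n = K - 1, since the last item is then determined.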

Cited by 43 publications (55 citation statements); references 30 publications.

“…They used the marginal probability of an item being relevant as a measure of the item's importance, and proposed a Bayesian approach to estimating these probabilities for all items. The aggregated ranking is then determined by ordering the probabilities. Unlike the supervised rank aggregation methods, the weighted rank aggregation method proposed by Desarkar et al. () is an unsupervised method that assigns different weights to different rankers according to their ranking quality, measured by their agreement with the “majority” of rankers. Motivated by the fact that rankings can be transformed into pairwise preferences, Volkovs and Zemel () proposed the multinomial preference model (MPM), a new score-based model for pairwise preferences, for unsupervised aggregation, and extended MPM to supervised aggregation. Rank aggregation methods for heterogeneous ranking data include the EM algorithm for mixtures of (weighted) distance-based models (Lee & Yu, ; Murphy & Martin, ), Bayesian inference for the Mallows mixture model (Meilă & Chen, ; Vitelli et al., ), and Bayesian inference for mixtures of Plackett–Luce models (Caron et al., ; Mollica & Tardella, ).…”
Section: Rank Aggregation Methods
Mentioning confidence: 99%
“…Rank aggregation methods for heterogeneous ranking data include the EM algorithm for mixtures of (weighted) distance-based models (Lee & Yu, 2012; Murphy & Martin, 2003), Bayesian inference for the Mallows mixture model (Meilă & Chen, 2010; Vitelli et al., 2018), and Bayesian inference for mixtures of Plackett–Luce models (Caron et al., 2014; Mollica & Tardella, 2017).…”
Section: Rank Aggregation Methods
Mentioning confidence: 99%
“…To account for the latent group structure z, Mollica and Tardella (2017) generalize the approach of Caron and Doucet (2012) with the following conjugate Bayesian model setup…”
Section: The Finite PL Mixture
Mentioning confidence: 99%
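The quoted passage points to a conjugate setup without reproducing it. A sketch of the standard formulation for a G-component Plackett–Luce mixture (the hyperparameter symbols c, d, and alpha are illustrative choices, not necessarily the paper's notation):

\omega \sim \mathrm{Dirichlet}(\alpha_1, \dots, \alpha_G), \qquad
z_s \mid \omega \sim \mathrm{Categorical}(\omega_1, \dots, \omega_G), \qquad
p_{gi} \overset{\mathrm{ind}}{\sim} \mathrm{Gamma}(c, d),

for components g = 1, ..., G and items i = 1, ..., K. Combined with the exponential latent variables of the Caron–Doucet data augmentation, every full conditional remains in a standard family (categorical, Dirichlet, gamma), which is what makes Gibbs sampling straightforward.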
“…, see Mollica and Tardella (2017) for more analytical details. The MAP solution represents a suitable starting point for initializing the Gibbs sampling (GS) algorithm.…”
Section: Gibbs Sampling
Mentioning confidence: 99%
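To make the GS step concrete, here is a minimal sketch (not the authors' code) of a Gibbs sampler for a finite Plackett–Luce mixture over full rankings, using the exponential data augmentation of Caron and Doucet (2012). The hyperparameters c, d, alpha and all function names are our own illustrative choices, and the MAP initialization mentioned in the quote is replaced here by a draw from the prior.

import numpy as np

rng = np.random.default_rng(0)

def pl_loglik(order, p):
    # Log Plackett-Luce probability of one full ordering under supports p.
    remaining = p[order]                      # supports listed in ranked order
    denom = np.cumsum(remaining[::-1])[::-1]  # total support still available at each stage
    return np.sum(np.log(remaining[:-1]) - np.log(denom[:-1]))

def gibbs_pl_mixture(orders, G, iters=500, c=1.0, d=1.0, alpha=1.0):
    # orders: (N, K) integer array, each row a full ranking (permutation of 0..K-1).
    N, K = orders.shape
    p = rng.gamma(c, 1.0 / d, size=(G, K))    # support parameters, drawn from the prior
    omega = np.full(G, 1.0 / G)               # mixture weights
    z = np.zeros(N, dtype=int)                # component memberships
    for _ in range(iters):
        # 1) memberships z_s | omega, p (categorical, computed via log weights)
        logw = np.log(omega) + np.array(
            [[pl_loglik(o, p[g]) for g in range(G)] for o in orders])
        w = np.exp(logw - logw.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        z = np.array([rng.choice(G, p=ws) for ws in w])
        # 2) weights omega | z (Dirichlet)
        omega = rng.dirichlet(alpha + np.bincount(z, minlength=G))
        # 3) latent exponentials y, then the conjugate gamma update for p | y, z
        for g in range(G):
            shape, rate = np.full(K, c), np.full(K, d)
            for o in orders[z == g]:
                denom = np.cumsum(p[g][o][::-1])[::-1]
                y = rng.exponential(1.0 / denom[:-1])  # one draw per selection stage
                shape[o[:-1]] += 1.0                   # items actually selected
                for t, y_t in enumerate(y):
                    rate[o[t:]] += y_t                 # items still available at stage t
            p[g] = rng.gamma(shape, 1.0 / rate)
    return omega, p, z

# Example: two latent groups with opposite preferences over K = 4 items.
orders = np.array([[0, 1, 2, 3]] * 20 + [[3, 2, 1, 0]] * 20)
omega, p, z = gibbs_pl_mixture(orders, G=2, iters=200)

Note that this sketch ignores label switching and runs a fixed number of iterations; a practical implementation would monitor convergence and, as the quote suggests, start the chain from the MAP solution.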
“…The basic premise is that the observed data should look like a plausible realisation from the (posterior) predictive distribution. Posterior predictive checks for ranked-data models have received relatively little attention in the literature, although Mollica and Tardella (2016) provide some guidance. Here, rather than focusing on particular test quantities, we aim to directly assess the full predictive distribution as follows.…”
Section: Model Assessment via Posterior Predictive Checks
Mentioning confidence: 99%
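As a concrete illustration of the generic idea (not the full-predictive assessment proposed in the citing paper, nor any particular guidance from Mollica and Tardella), a minimal sketch of a posterior predictive check for ranked data: replicate datasets are simulated from posterior draws of a Plackett–Luce mixture, and a simple test quantity, the top-choice frequency of each item, is compared with its observed value. All function names are our own.

import numpy as np

rng = np.random.default_rng(1)

def sample_pl_ordering(p):
    # Draw one full ordering from a Plackett-Luce model with supports p.
    items = list(range(len(p)))
    probs = np.asarray(p, dtype=float)
    order = []
    while items:
        w = probs[items] / probs[items].sum()
        pick = rng.choice(len(items), p=w)
        order.append(items.pop(pick))
    return order

def top_choice_freq(orders, K):
    # Observed frequency with which each item appears in first position.
    firsts = np.array([o[0] for o in orders])
    return np.bincount(firsts, minlength=K) / len(orders)

def ppc_top_choice(orders, posterior_draws, K):
    # For each posterior draw (omega, p), simulate a replicate dataset of the
    # same size and record its top-choice frequencies.
    N = len(orders)
    reps = []
    for omega, p in posterior_draws:       # p has shape (G, K)
        z = rng.choice(len(omega), size=N, p=omega)
        rep = [sample_pl_ordering(p[g]) for g in z]
        reps.append(top_choice_freq(rep, K))
    return np.array(reps)                  # compare its quantiles with the observed value

If the observed top_choice_freq(orders, K) falls in the tails of the replicated distribution, the fitted mixture fails to reproduce that aspect of the data; richer test quantities (pairwise marginals, full-ranking frequencies) follow the same pattern.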