2019
DOI: 10.1111/rssb.12333
On Choosing Mixture Components via Non-Local Priors

Abstract: Choosing the number of mixture components remains an elusive challenge. Model selection criteria can be either overly liberal or conservative and return poorly separated components of limited practical use. We formalize non-local priors (NLPs) for mixtures and show how they lead to well-separated components with non-negligible weight, interpretable as distinct subpopulations. We also propose an estimator for posterior model probabilities under local priors and NLPs, showing that Bayes factors are ratio…

Cited by 15 publications (11 citation statements). References 57 publications (136 reference statements).
“…Non-local priors possess appealing properties for Bayesian model selection. They discard spurious parameters faster as the sample size n grows, but preserve exponential rates to detect important coefficients (Johnson and Rossell, 2010; Fúquene et al, 2018) and can lead to improved parameter estimation via shrinkage (Rossell and Telesca, 2017). To illustrate the motivation for NLPs in our setting consider Figure 1…”

Section: Non-local Spike-and-slab Prior (mentioning; confidence: 99%)
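To make the non-local idea in this excerpt concrete, below is a minimal sketch of the first-order moment (MOM) non-local prior of Johnson and Rossell (2010), p(theta) = (theta^2 / tau) N(theta; 0, tau). The function name mom_prior_density and the default tau are illustrative choices, not taken from the paper under discussion.

```python
import numpy as np
from scipy.stats import norm

def mom_prior_density(theta, tau=1.0):
    """MOM non-local prior: p(theta) = (theta^2 / tau) * N(theta; 0, tau).

    The theta^2 factor makes the density vanish at theta = 0, so values
    indistinguishable from the null receive negligible prior mass; this
    is what lets spurious parameters be discarded faster as n grows.
    """
    theta = np.asarray(theta, dtype=float)
    return (theta**2 / tau) * norm.pdf(theta, loc=0.0, scale=np.sqrt(tau))

grid = np.linspace(-5.0, 5.0, 2001)
dx = grid[1] - grid[0]
print(np.sum(mom_prior_density(grid)) * dx)  # ~1.0: a proper density, since E[theta^2] = tau under N(0, tau)
print(mom_prior_density(0.0))                # 0.0: no prior mass exactly at the null
```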
“…The relative entropy function D takes values D(μ ‖ ν) = ∫ log(dμ/dν) dμ, the entropy of a probability measure μ relative to a measure ν that dominates μ (e.g., Maas, 2017). Complementary statistical applications of relative entropy include the prevention of overfitting models (Fúquene et al, 2016; Gelman et al, 2017), the idealization of Cromwell's rule for revising priors (Bickel, 2018), and the automatic construction of unsharpened priors (Section 8, Example 7).…”

Section: Adjusting Priors for the Simplicity of Data PDFs (mentioning; confidence: 99%)
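As a small numerical illustration of the definition D(μ ‖ ν) = ∫ log(dμ/dν) dμ quoted above, the sketch below compares a grid evaluation of the integral against the standard closed-form Gaussian relative entropy. The helper kl_gaussians and the particular Gaussians chosen are hypothetical, used only to verify the formula.

```python
import numpy as np
from scipy.stats import norm

def kl_gaussians(m1, s1, m2, s2):
    """Closed-form relative entropy D(N(m1, s1^2) || N(m2, s2^2)):
    log(s2/s1) + (s1^2 + (m1 - m2)^2) / (2 s2^2) - 1/2."""
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2.0 * s2**2) - 0.5

# Grid check of D(mu || nu) = \int log(dmu/dnu) dmu,
# with mu = N(0, 1) and nu = N(2, 1.5^2), so that nu dominates mu.
x = np.linspace(-15.0, 15.0, 300001)
dx = x[1] - x[0]
p = norm.pdf(x, 0.0, 1.0)   # density of mu
q = norm.pdf(x, 2.0, 1.5)   # density of nu
print(np.sum(p * np.log(p / q)) * dx)    # ~1.0166, matches the closed form
print(kl_gaussians(0.0, 1.0, 2.0, 1.5))  # 1.0166...
```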
“…This assumption, although convenient for mathematical tractability, is often an oversimplification and might produce misleading results by returning too many clusters. This motivated Petralia et al (2012), Xu et al (2016), Fúquene et al (2019), Quinlan et al (2020), Bianchini et al (2020), and Xie and Xu (2019) to explicitly define prior models with repulsion between the locations, thereby obtaining well-separated components.…”

Section: Previous Work on Repulsive Mixture Models (mentioning; confidence: 99%)
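A minimal sketch of what "repulsion between the locations" can look like, assuming a product-of-pairwise-terms penalty in the spirit of Petralia et al (2012); the function repulsion_penalty and its hyperparameters tau and nu are illustrative, not the exact form used in any of the cited papers.

```python
import numpy as np
from itertools import combinations

def repulsion_penalty(locations, tau=1.0, nu=2.0):
    """Hypothetical repulsion factor h(mu) = prod_{i<j} exp(-tau / d_ij^nu),
    where d_ij is the Euclidean distance between component locations i and j.

    The factor decays to 0 as any two locations coalesce, so multiplying a
    base prior on the locations by h favours well-separated components.
    """
    locations = np.atleast_2d(locations)
    h = 1.0
    for i, j in combinations(range(len(locations)), 2):
        d = np.linalg.norm(locations[i] - locations[j])
        h *= np.exp(-tau / d**nu)
    return h

print(repulsion_penalty([[0.0], [0.1]]))  # ~0: coalescing components are heavily penalised
print(repulsion_penalty([[0.0], [5.0]]))  # ~0.96: well-separated components are barely penalised
```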
“…In Petralia et al (2012), Fúquene et al (2019), and Quinlan et al (2020), m is finite and fixed, but, as mentioned before, this cannot guarantee posterior consistency for the number of components. However, Xu et al (2016), Bianchini et al (2020), and Xie and Xu (2019) assumed m to be finite and random.…”

Section: Previous Work on Repulsive Mixture Models (mentioning; confidence: 99%)