2022
DOI: 10.48550/arxiv.2206.05210
Preprint

On the safe use of prior densities for Bayesian model selection

F. Llorente,
L. Martino,
E. Curbelo
et al.

Abstract: The application of Bayesian inference for the purpose of model selection is very popular nowadays. In this framework, models are compared through their marginal likelihoods, or their quotients, called Bayes factors. However, marginal likelihoods depend on the choice of prior. For model selection, even diffuse priors can actually be very informative, unlike in the parameter estimation problem. Furthermore, when the prior is improper, the marginal likelihood of the corresponding model is undetermined. In this wor…
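For orientation, the two quantities named in the abstract can be written out explicitly. This is standard notation, not a verbatim excerpt from the paper; the symbols ℓ for the likelihood and g for the prior are our choices:

```latex
% Marginal likelihood (evidence) of model M_i: the likelihood averaged over the prior.
p(\mathbf{y} \mid \mathcal{M}_i)
  = \int \ell(\mathbf{y} \mid \boldsymbol{\theta}_i, \mathcal{M}_i)\,
         g(\boldsymbol{\theta}_i \mid \mathcal{M}_i)\, \mathrm{d}\boldsymbol{\theta}_i

% Bayes factor: the quotient of two marginal likelihoods.
\mathrm{BF}_{ij} = \frac{p(\mathbf{y} \mid \mathcal{M}_i)}{p(\mathbf{y} \mid \mathcal{M}_j)}
```

Because the marginal likelihood averages the likelihood over the prior g, any change to g changes p(y | M_i) directly; in particular, an improper g is defined only up to an arbitrary constant, which propagates into p(y | M_i) and leaves the Bayes factor undetermined.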

Cited by 2 publications (4 citation statements); citing works published in 2022 and 2024.
References 50 publications.
“…Important perspectives for future work include: a detailed theoretical analysis of the convergence properties of proximal nested sampling; an extension to (biased) accelerated proximal methods (Vargas et al 2020); and an analysis of the properties of marginal maximum likelihood estimation for the class of models considered in this paper, such as estimator consistency for model selection in an M-closed setting and concentration in an M-open setting (Llorente et al 2022). Moreover, it would be interesting to apply proximal nested sampling to other types of models, such as models with likelihood-based priors (Llorente et al 2022), which can be handled straightforwardly by proximal nested sampling when the likelihood is log-concave. It would also be interesting to modify proximal nested sampling to tackle high-dimensional models that are multi-modal, particularly models with data-driven priors encoded by neural networks (see e.g.…”
Section: Discussion
confidence: 99%
“…Moreover, since the proposed proximal nested sampling approach was specifically designed for models that are log-concave and with Bayesian imaging applications in mind, we anticipate that it will be mostly used with informative priors designed to regularise and stabilise high-dimensional estimation problems. As explained in Llorente et al (2022), the marginal likelihood can be very sensitive to the choice of the prior. Therefore, it is important that the parameters of the prior are chosen carefully.…”
Section: Explicit Iterative Formula For Drawing Samples
confidence: 99%
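The sensitivity noted in that statement is easy to demonstrate numerically. Below is a minimal sketch (our illustration, not code from any of the papers above), assuming a conjugate Gaussian model where the marginal likelihood is available in closed form: as the prior scale tau grows, the evidence keeps shrinking, even though such "diffuse" priors would be considered harmless for parameter estimation.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n, sigma = 20, 1.0
y = rng.normal(loc=0.5, scale=sigma, size=n)  # data drawn with true mean 0.5

# Model: y_i = theta + eps_i, eps_i ~ N(0, sigma^2), prior theta ~ N(0, tau^2).
# Marginalizing theta gives y ~ N(0, sigma^2 I + tau^2 * ones), so the
# log marginal likelihood is an exact multivariate-normal log density.
for tau in [0.1, 1.0, 10.0, 100.0]:
    cov = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))
    log_z = multivariate_normal(mean=np.zeros(n), cov=cov).logpdf(y)
    print(f"tau = {tau:6.1f}  ->  log marginal likelihood = {log_z:8.3f}")
```

For large tau the log evidence falls roughly like -log(tau): a prior made ten times wider costs about log 10 ≈ 2.3 nats, which is enough to flip a model comparison on its own.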
“…Varying searchable parameter space does not pose much of an issue apart from sampling inefficiencies when employing informative priors, as exploration beyond their regions of significant probability density with naturally well-defined support returns little to no information. The same cannot be said for uniform priors, whose normalization biases measurements of the  via Equation (2) (for a detailed study regarding priors and caveats such as this within the context of Bayesian inference and model selection, see Llorente et al 2022).…”
Section: Varying Free Parameters
confidence: 99%
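The normalization effect described in that last excerpt can also be reproduced directly. The following sketch (a hypothetical stand-in, not the cited work's Equation (2)) computes the evidence of a Gaussian-mean model under Uniform(-a, a) priors of increasing half-width a: once a exceeds the effective support of the likelihood, the integral stops growing and the evidence scales as 1/(2a), so a Bayes factor against any fixed alternative is driven by the arbitrary prior width rather than by the data.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

rng = np.random.default_rng(1)
y = rng.normal(loc=0.0, scale=1.0, size=30)  # Gaussian data, known unit variance

def log_like(theta):
    return norm.logpdf(y, loc=theta, scale=1.0).sum()

m = float(y.mean())
c = log_like(m)  # peak log-likelihood, subtracted for numerical stability

# Evidence under theta ~ Uniform(-a, a): Z = (1 / 2a) * integral of the likelihood.
# 'points=[m]' tells the quadrature where the sharply peaked integrand lives.
for a in [1.0, 10.0, 100.0]:
    integral, _ = quad(lambda t: np.exp(log_like(t) - c), -a, a, points=[m])
    log_z = c + np.log(integral / (2 * a))
    print(f"a = {a:6.1f}  ->  log Z = {log_z:8.3f}")
```

Each tenfold widening of the support subtracts log 10 ≈ 2.3 from log Z while the data and the fit are unchanged, which is the bias the quoted passage attributes to the prior normalization.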