2019
DOI: 10.48550/arxiv.1902.05539
Preprint

A Parsimonious Tour of Bayesian Model Uncertainty

Pierre-Alexandre Mattei

Abstract: Modern statistical software and machine learning libraries are enabling semi-automated statistical inference. Within this context, it appears easier and easier to try and fit many models to the data at hand, thereby reversing the Fisherian way of conducting science, in which data are collected after the scientific hypothesis (and hence the model) has been determined. The renewed goal of the statistician becomes to help the practitioner choose within such large and heterogeneous families of models, a task known as model …

Cited by 3 publications (3 citation statements)
References 105 publications (133 reference statements)

“…If $B_{BA} < 1$, hypothesis A is preferred by the data; otherwise, if $B_{BA} > 1$, hypothesis B is favored. However, this rule is not always straightforward to apply, since the estimate of the Bayes factor may itself suffer from uncertainty [51,52]. In a realistic scenario, more stringent bounds are therefore required before preferring a hypothesis [53].…”
Section: B. Model Selection
confidence: 99%
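The decision rule quoted above reduces to comparing the marginal likelihoods of the two hypotheses. As a minimal sketch, assuming a hypothetical coin-tossing setup in which both marginal likelihoods are available in closed form (the hypotheses, data, and threshold below are illustrative, not taken from the cited papers):

```python
import math
from scipy.stats import betabinom

# Bayes factor B_BA = p(data | B) / p(data | A) for two hypotheses about a coin.
# Hypothesis A: the coin is fair (theta = 0.5 exactly).
# Hypothesis B: theta is unknown, with a uniform Beta(1, 1) prior.
n, k = 100, 61  # illustrative data: 61 heads in 100 tosses

# Marginal likelihood under A: a plain binomial likelihood at theta = 0.5.
log_m_A = (math.lgamma(n + 1) - math.lgamma(k + 1)
           - math.lgamma(n - k + 1) + n * math.log(0.5))

# Marginal likelihood under B: a beta-binomial with a = b = 1.
log_m_B = betabinom.logpmf(k, n, 1, 1)

B_BA = math.exp(log_m_B - log_m_A)
print(f"B_BA = {B_BA:.2f}")  # B_BA > 1 favors B, but only mildly here
```

On these data $B_{BA} \approx 1.4$: B is favored, but far below a stringent bound such as Jeffreys' conventional threshold of 10, which illustrates why a bare comparison against 1 can be an unreliable decision rule.
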
“…One approach, by Graves, Mohamed, and Hinton (2013), is to use a Gaussian variational posterior to approximate the distribution of the weights in a network, but the capacity of the uncertainty representation is limited by the variational distribution. In general, MCMC yields estimates with higher variance and lower bias, while VI yields estimates with higher bias but lower variance (Mattei 2020). The preeminent Bayesian deep learning approach of Gal and Ghahramani (2016) showed that variational inference can be approximated without modifying the network.…”
Section: Related Work
confidence: 75%
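The Gal and Ghahramani (2016) approach referenced above is Monte Carlo dropout: dropout is left active at prediction time, and repeated stochastic forward passes approximate the predictive distribution. A minimal PyTorch sketch, with an illustrative architecture, dropout rate, and sample count (none of which come from the cited works):

```python
import torch
import torch.nn as nn

# Illustrative network with a dropout layer; the sizes are arbitrary.
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, n_samples=100):
    """Predictive mean and spread from repeated stochastic forward passes."""
    model.train()  # keeps dropout layers stochastic at prediction time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

x = torch.randn(5, 10)                    # five illustrative inputs
mean, std = mc_dropout_predict(model, x)  # std acts as an uncertainty estimate
```
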
“…If $B_{BA} < 1$, hypothesis A is preferred by the data; otherwise, if $B_{BA} > 1$, hypothesis B is favored. However, this rule is not always straightforward to apply, since the estimate of the Bayes factor may itself suffer from uncertainty [51,52]. More stringent bounds are then required before preferring a hypothesis [53].…”
Section: B. Model Selection
confidence: 99%