2023
DOI: 10.1214/21-ba1302
A Comparison of Learning Rate Selection Methods in Generalized Bayesian Inference

Abstract: Generalized Bayes posterior distributions are formed by putting a fractional power on the likelihood before combining with the prior via Bayes's formula. This fractional power, which is often viewed as a remedy for potential model misspecification bias, is called the learning rate, and a number of data-driven learning rate selection methods have been proposed in the recent literature. Each of these proposals has a different focus, a different target they aim to achieve, which makes them difficult to compare. I…
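As a concrete illustration of the construction the abstract describes, here is a minimal sketch of a generalized (tempered) posterior computed on a grid for a toy normal-mean model. The model, prior, sample size, and learning-rate value are hypothetical choices for exposition, not quantities from the paper.

```python
# Minimal sketch of a generalized Bayes posterior: the likelihood is raised
# to a fractional power eta (the learning rate) before Bayes's formula.
# Model, prior, and eta below are illustrative placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=1.0, size=50)   # observed data (toy example)

eta = 0.5                                     # learning rate in (0, 1]
theta = np.linspace(-2, 4, 400)               # grid over the mean parameter
dtheta = theta[1] - theta[0]

log_prior = stats.norm(0.0, 2.0).logpdf(theta)
log_lik = np.array([stats.norm(t, 1.0).logpdf(x).sum() for t in theta])

# Generalized posterior: pi_eta(theta) proportional to L(theta)^eta * prior(theta)
log_post = eta * log_lik + log_prior
post = np.exp(log_post - log_post.max())
post /= post.sum() * dtheta                   # normalize on the grid

print("generalized posterior mean:", (theta * post).sum() * dtheta)
```

Setting eta = 1 recovers the ordinary Bayes posterior; values below 1 flatten the likelihood contribution, which is the usual remedy for over-confidence under misspecification.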

Cited by 18 publications (8 citation statements, all classified as mentioning); references 41 publications.
“…This provides a heuristic motivation for the default β = 1. However, in a misspecified setting, smaller values of β are needed to avoid over-confidence in the generalised posterior by taking misspecification into account; see the recent review of Wu and Martin (2020). Here we aim to pick β such that the scale of the asymptotic precision matrix of the generalised posterior (H; Theorem 2) matches that of the minimum KSD point estimator (HJ⁻¹H; Lemma 4), an approach proposed in Lyddon et al. (2019).…”
Section: Default Settings for KSD-Bayes (mentioning; confidence: 99%)
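The scale-matching idea quoted above can be sketched numerically. Assuming H and J denote the usual curvature and variability matrices of the loss, one illustrative way to match the overall scale of the generalized-posterior precision βH to the estimator precision HJ⁻¹H is trace matching. The matrices and the trace rule below are placeholders for exposition, not the exact calibration used in KSD-Bayes.

```python
# Sketch of learning-rate calibration by scale matching (Lyddon et al., 2019
# style): choose beta so that tr(beta * H) equals tr(H J^{-1} H).
# H and J are hypothetical placeholder matrices.
import numpy as np

def scale_matched_beta(H, J):
    """Return beta solving tr(beta * H) = tr(H @ inv(J) @ H)."""
    target = np.trace(H @ np.linalg.solve(J, H))  # tr(H J^{-1} H)
    return target / np.trace(H)

H = np.array([[2.0, 0.3], [0.3, 1.5]])  # hypothetical curvature matrix
J = np.array([[1.0, 0.1], [0.1, 0.8]])  # hypothetical variability matrix
print("beta =", scale_matched_beta(H, J))
```

When the model is well specified, H and J coincide asymptotically and the rule returns beta near 1, consistent with the heuristic default mentioned in the quote.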
“…There are various approaches to the estimation of η that lead to the posterior distribution concentrating at θ⋆, e.g. [17] and [31]; see the review [45]. Here I will give a very brief discussion.…”
Section: Estimation of η (mentioning; confidence: 99%)
“…At η = 1, η-SMI is standard Bayesian inference, and at η = 0, η-SMI reproduces the Cut-model. Carmona and Nicholls (2020) suggest choosing η to maximize the expected log pointwise predictive density (ELPD), though this choice is not an essential part of their method and other criteria (Wu and Martin, 2020) may be more appropriate in different settings. Liu and Goudie (2021) adapt η-SMI for Geographically Weighted Regression using an influence parameter across likelihood factors, which is modeled as a function of the distance between the spatial observation locations.…”
Section: Introduction (mentioning; confidence: 99%)
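For concreteness, here is a minimal sketch of the ELPD-based selection that Carmona and Nicholls (2020) suggest: evaluate a grid of η values by the log pointwise predictive density on held-out data and keep the maximizer. The normal model, train/test split, and grid below are hypothetical placeholders, not the setup of any cited paper.

```python
# Sketch of learning-rate selection by maximizing a held-out ELPD-style
# criterion over a grid of eta values. Model and data split are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(1.0, 1.0, size=100)
x_train, x_test = x[:70], x[70:]

theta = np.linspace(-2, 4, 400)               # grid over the mean parameter
dtheta = theta[1] - theta[0]
log_prior = stats.norm(0.0, 2.0).logpdf(theta)
log_lik = np.array([stats.norm(t, 1.0).logpdf(x_train).sum() for t in theta])

def elpd(eta):
    """Held-out log pointwise predictive density under the eta-posterior."""
    log_post = eta * log_lik + log_prior
    w = np.exp(log_post - log_post.max())
    w /= w.sum() * dtheta                     # normalized posterior on grid
    # posterior predictive density at each held-out point
    dens = np.array([(stats.norm(theta, 1.0).pdf(xi) * w * dtheta).sum()
                     for xi in x_test])
    return np.log(dens).sum()

etas = np.linspace(0.1, 1.5, 15)
scores = [elpd(e) for e in etas]
print("selected eta:", etas[int(np.argmax(scores))])
```

This grid search targets predictive performance; as the quote notes, other criteria surveyed by Wu and Martin (2020) aim at different targets, such as calibration of posterior uncertainty, and may be preferable in other settings.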