2022
DOI: 10.21203/rs.3.rs-1838229/v1
Preprint
Adaptation of the Tuning Parameter in General Bayesian Inference with Robust Divergence

Abstract: We introduce a novel methodology for robust Bayesian estimation with robust divergence (e.g., density power divergence or γ-divergence), indexed by tuning parameters. It is well known that the posterior density induced by robust divergence gives highly robust estimators against outliers if the tuning parameter is appropriately and carefully chosen. In a Bayesian framework, one way to find the optimal tuning parameter would be using evidence (marginal likelihood). However, we theoretically and numerically illus…
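To make the setup concrete, here is a minimal sketch (not the authors' implementation) of the general Bayesian posterior induced by the density power divergence for a Gaussian location-scale model. The helper names (`dpd_loss`, `log_general_posterior`), the known closed-form Gaussian integral, and the choice of model are illustrative assumptions; `beta` plays the role of the tuning parameter discussed in the abstract.

```python
import numpy as np
from scipy.stats import norm

def dpd_loss(x, mu, sigma, beta):
    """Density power divergence (Basu et al., 1998) loss for a Gaussian
    location-scale model, evaluated at each observation in x.

    beta is the tuning parameter: as beta -> 0 the loss approaches the
    negative log-likelihood (up to constants); larger beta downweights
    observations in low-density regions, i.e., outliers.
    """
    f = norm.pdf(x, loc=mu, scale=sigma)
    # Closed-form value of the integral of f^(1+beta) for a Gaussian density.
    integral = (2 * np.pi * sigma**2) ** (-beta / 2) / np.sqrt(1 + beta)
    return -f**beta / beta + integral / (1 + beta)

def log_general_posterior(mu, sigma, x, beta, prior_logpdf):
    """Unnormalized log of the general (Gibbs) posterior induced by the
    DPD loss: log prior minus the summed robust loss over the data."""
    return prior_logpdf(mu, sigma) - np.sum(dpd_loss(x, mu, sigma, beta))
```

Because the loss reduces to the negative log-likelihood in the limit beta → 0, the ordinary posterior is recovered there; the robustness-efficiency trade-off arises entirely from how large beta is taken.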

Cited by 3 publications (1 citation statement)
References 38 publications
“…There is also a growing literature following Basu et al (1998) that applies the Tsallis score to define a robust loss function to fit any given probability model. Such robustification depends on a hyper‐parameter that governs robustness‐efficiency trade‐offs and often leads to an improper model similar to Figure 1 (see Figure S1, and Yonekura and Sugasawa (2021) for a recent pre‐print building on our improper model interpretation to address the hyper‐parameter selection). In these scenarios traditional model selection tools are not applicable to choose the more appropriate loss.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
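The robustness-efficiency trade-off described in the quoted passage can be illustrated with a small numerical sketch. It reuses the hypothetical `dpd_loss` helper from the sketch above and computes minimum density power divergence location estimates on clean and contaminated Gaussian samples for several values of the tuning parameter; the simulated data, the grid of beta values, and the known-scale simplification are all assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# A clean Gaussian sample and the same sample with 5% gross outliers.
clean = rng.normal(loc=0.0, scale=1.0, size=200)
contaminated = clean.copy()
contaminated[:10] = 10.0  # gross outliers

def mdpde_location(x, beta, sigma=1.0):
    """Minimum density power divergence estimate of the location,
    treating the scale as known (a simplification for illustration)."""
    res = minimize_scalar(
        lambda mu: np.sum(dpd_loss(x, mu, sigma, beta)),
        bounds=(-5, 15), method="bounded",
    )
    return res.x

# Small beta: nearly efficient on clean data but pulled by the outliers.
# Larger beta: estimates on contaminated data stay close to the truth (0),
# at the cost of some efficiency on clean data.
for beta in (0.01, 0.1, 0.5, 1.0):
    print(f"beta={beta:<4}  clean={mdpde_location(clean, beta):+.3f}  "
          f"contaminated={mdpde_location(contaminated, beta):+.3f}")
```

The point of the sketch is only to show the trade-off the hyper-parameter governs; selecting that hyper-parameter in a principled way is exactly the problem the preprint addresses.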