2014
DOI: 10.1214/14-ba881

Hellinger Distance and Non-informative Priors

Cited by 18 publications (15 citation statements). References 34 publications.
“…This interval is effectively the same as the 95% highest posterior density intervals based on the fiducial and default-prior (cf. Shemyakin) Bayesian posteriors plotted in Figure (b), which are indistinguishable. The two-sided 95% fiducial interval, (98.49, 105.92), reported in Hannig et al., agrees with the IM plausibility interval obtained by using the “default” random set Eq.…”
Section: Inferential Models for a Class of Non-regular Models
Confidence: 87%
“…This corresponds to a(θ) = θ and b(θ) = θ² − θ. Bayesian approaches based on default priors are available for this problem, for example, Berger et al. (Example 9) and Shemyakin (Example 3). Unlike the regular scalar-parameter case, where the Jeffreys prior is the agreed-upon default prior, for this non-regular problem there are several priors that claim to be “reference”, and no general non-asymptotic guarantees are available on the coverage probability of the corresponding credible intervals.…”
Section: Inferential Models for a Class of Non-regular Models
Confidence: 99%
“…The major advantages of the Hellinger distance are threefold: 1) it defines a true metric for probability distributions, unlike, for example, the Kullback-Leibler divergence; 2) it is computationally simple, compared to the Wasserstein distance; and 3) it is a special case of the f-divergence, which enjoys many geometric properties and has been used in many statistical applications. For example, Liese (2012) showed that f-divergence can be viewed as the integrated Bayes risk in hypothesis testing, where the integral is with respect to a distribution on the prior; Nguyen et al. (2009) linked f-divergence to the achievable accuracy in binary classification problems; Jager and Wellner (2007) used a subclass of f-divergences for goodness-of-fit testing; Rao (1995) demonstrated the advantages of the Hellinger metric for graphical representations of contingency table data; Srivastava and Klassen (2016) adopted the Hellinger distance to measure distances between functional and shape data; Shemyakin (2014) showed the connection of the Hellinger distance to Hellinger information, which is useful in non-regular statistical models where Fisher information is not available; and finally, Servidea and Meng (2006) derived an identity between the Hellinger derivative and the Fisher information that is useful for studying the interplay between statistical physics and statistical computation.…”
Section: Algorithm 1 Generating Process for T-LDA
Confidence: 99%
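The advantages listed in the snippet above follow directly from the definition H(p, q) = (1/√2)·‖√p − √q‖₂. A minimal sketch for discrete distributions (the function name and test vectors are illustrative, not taken from any cited paper):

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions.

    H(p, q) = (1/sqrt(2)) * || sqrt(p) - sqrt(q) ||_2,
    a true metric bounded in [0, 1].
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2.0)

# Identical distributions sit at distance 0; distributions with
# disjoint supports sit at the maximal distance 1.
uniform = [0.25, 0.25, 0.25, 0.25]
point_a = [1.0, 0.0, 0.0, 0.0]
point_b = [0.0, 1.0, 0.0, 0.0]
print(hellinger(uniform, uniform))  # 0.0
print(hellinger(point_a, point_b))  # 1.0
```

Unlike the Kullback-Leibler divergence, this quantity is symmetric and stays finite even when the two supports differ.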
“…A prior that maximizes the Kullback–Leibler information is by default a reference prior (for a general discussion of choosing a reference prior, we refer to Jeffreys (1946), Berger, Bernardo, and Sun (2009), and Shemyakin (2014)). The above approach to the construction of the reference prior is due to Bernardo (1979) (an excellent summary is given in Lehmann and Casella 1998, with recent developments in Berger, Bernardo, and Sun 2009, 2012).…”
Section: Bayesian Solution
Confidence: 99%
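In the regular scalar-parameter case referred to in the snippet above, the Jeffreys prior is proportional to the square root of the Fisher information, π(θ) ∝ √I(θ). A minimal sketch for the Bernoulli model (function names are our own, for illustration only):

```python
import numpy as np

def bernoulli_fisher_information(theta):
    """Fisher information of one Bernoulli(theta) observation:
    I(theta) = 1 / (theta * (1 - theta))."""
    return 1.0 / (theta * (1.0 - theta))

def jeffreys_prior_unnormalized(theta):
    """Jeffreys prior pi(theta) proportional to sqrt(I(theta)),
    i.e. theta**(-1/2) * (1 - theta)**(-1/2)."""
    return np.sqrt(bernoulli_fisher_information(theta))

# Normalizing over (0, 1) yields the Beta(1/2, 1/2) density
# (normalizing constant pi); at theta = 0.5 the unnormalized value is 2.
print(jeffreys_prior_unnormalized(0.5))  # 2.0
```

For non-regular models, where this Fisher-information recipe breaks down, Shemyakin (2014) proposes Hellinger information as the replacement, which is the point of the cited paper.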