2015
DOI: 10.1109/tit.2015.2457913

Bayesian Error-Based Sequences of Statistical Information Bounds

Abstract: The relation between statistical information and Bayesian error is sharpened by deriving finite sequences of upper and lower bounds on equivocation entropy (EE) in terms of the minimum probability of error (MPE) and related Bayesian quantities. The well-known Fano upper bound and Feder-Merhav lower bound on EE are tightened by including a succession of posterior probabilities, starting at the largest, which directly controls the MPE, and proceeding to successively lower ones. A number of other interesting results…
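To make the quantities in the abstract concrete, here is a minimal Python sketch (not from the paper; the joint pmf is a made-up toy example) that computes the equivocation entropy H(X|Y), the MAP minimum probability of error, and the two classical bounds the paper tightens: the Fano upper bound H(X|Y) <= h(Pe) + Pe*log2(M-1) and the Feder-Merhav piecewise-linear lower bound through the points ((k-1)/k, log2 k).

```python
import numpy as np

def equivocation_and_mpe(p_xy):
    """Equivocation H(X|Y) in bits and the MAP minimum probability of
    error (MPE), both computed from a joint pmf array p_xy[x, y]."""
    p_y = p_xy.sum(axis=0)                 # marginal p(y)
    p_x_given_y = p_xy / p_y               # posterior columns p(x|y)
    mask = p_xy > 0
    h = -(p_xy[mask] * np.log2(p_x_given_y[mask])).sum()
    # MPE of the MAP rule: 1 - E_y[max_x p(x|y)]
    mpe = 1.0 - (p_y * p_x_given_y.max(axis=0)).sum()
    return h, mpe

def fano_upper(pe, m):
    """Fano upper bound: H(X|Y) <= h(pe) + pe * log2(m - 1) for an
    alphabet of size m, where h() is the binary entropy function."""
    h_pe = 0.0 if pe <= 0 or pe >= 1 else (
        -pe * np.log2(pe) - (1 - pe) * np.log2(1 - pe))
    return h_pe + (pe * np.log2(m - 1) if m > 1 else 0.0)

def feder_merhav_lower(pe):
    """Feder-Merhav lower bound: the piecewise-linear function through
    the points ((k-1)/k, log2 k), k = 1, 2, ..., evaluated at pe."""
    k = 1
    while pe > k / (k + 1):
        k += 1
    slope = k * (k + 1) * np.log2((k + 1) / k)
    return np.log2(k) + slope * (pe - (k - 1) / k)

# toy joint pmf p(x, y) over |X| = 3 symbols and 2 observations
p_xy = np.array([[0.30, 0.10],
                 [0.15, 0.20],
                 [0.05, 0.20]])
ee, pe = equivocation_and_mpe(p_xy)
print(f"H(X|Y) = {ee:.4f} bits, MPE = {pe:.4f}")
print(f"Feder-Merhav lower bound: {feder_merhav_lower(pe):.4f}")
print(f"Fano upper bound:         {fano_upper(pe, p_xy.shape[0]):.4f}")
```

For this toy pmf the MPE is 0.5 and H(X|Y) is about 1.41 bits, which indeed sits between the Feder-Merhav lower bound (1.0) and the Fano upper bound (1.5); the paper's contribution is a finite sequence of bounds that closes this gap by using more of the ordered posterior probabilities.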

Cited by 8 publications (10 citation statements) | References 13 publications
“…In the special case of finite alphabets, this result was obtained in [25], [48] and [76]. In the finite alphabet setting, Prasad [58, Section 5] recently refined the bound in (181) by lower bounding H(X|Y) subject to the knowledge of the first two largest posterior probabilities rather than only the largest one; following the same approach, [58, Section 6] gives a refinement of Fano's inequality. Example 1 (cont.)…”
Section: (180) · mentioning · confidence: 99%
“…However, when more constraints on the joint distribution of X and Y are given, tighter bounds can be obtained. Prasad [8] introduced two series of lower bounds on H(X|Y) based on partial knowledge of the posterior distribution p(x|y). The first is in terms of the k largest posterior probabilities p(x|y) for each y, that we could label p_1(y), p_2(y), ….”
Section: A. Equivocation, Mutual Information and the Minimal Probability of Error · mentioning · confidence: 99%
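As a rough illustration of the inputs to the bounds described in this excerpt, the following sketch (hypothetical; not code from [8] or from Prasad's paper) extracts the k largest posterior probabilities p_1(y) >= p_2(y) >= … for each observation y from a joint pmf, reusing the toy pmf from the sketch above.

```python
import numpy as np

def k_largest_posteriors(p_xy, k):
    """For each observation y, return the k largest posterior
    probabilities p_1(y) >= p_2(y) >= ... >= p_k(y)."""
    p_y = p_xy.sum(axis=0)                 # marginal p(y)
    p_x_given_y = p_xy / p_y               # posterior columns p(x|y)
    # sort each column in descending order and keep the top k rows
    return -np.sort(-p_x_given_y, axis=0)[:k, :]

# same toy joint pmf as in the earlier sketch
p_xy = np.array([[0.30, 0.10],
                 [0.15, 0.20],
                 [0.05, 0.20]])
print(k_largest_posteriors(p_xy, k=2))
# columns give [0.6, 0.3] for y = 0 and [0.4, 0.4] for y = 1
```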
“…Using this, Equation 34 can be re-expressed as […]. For discrete parameters, which are not differentiable and therefore do not have a defined Fisher information matrix, Fano's inequality provides a similar lower bound relating Lindley information and estimation error (e.g., Cover & Thomas, 1991; Prasad, 2015):…”
Section: The Relation in Equation 26 · mentioning · confidence: 99%
“…Interestingly, tight upper bounds to the Lindley information are also available for discrete variables (that is, reversing the direction of the inequality in Equation 37; Feder & Merhav, 1994; Prasad, 2015). This suggests that even more precise global bounds on the reliability might be obtained by applying upper as well as lower bounds.…”
Section: Global Minimax Bounds on Reliability and Measurement Potential · mentioning · confidence: 99%