2019 IEEE Information Theory Workshop (ITW)
DOI: 10.1109/itw44776.2019.8989292
A Tight Upper Bound on Mutual Information

Abstract: We derive a tight lower bound on equivocation (conditional entropy), or equivalently a tight upper bound on mutual information between a signal variable and channel outputs. The bound is in terms of the joint distribution of the signals and maximum a posteriori decodes (most probable signals given channel output). As part of our derivation, we describe the key properties of the distribution of signals, channel outputs and decodes, that minimizes equivocation and maximizes mutual information. This work addresses…
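The abstract states the bound in terms of the joint distribution of signals and their maximum a posteriori (MAP) decodes. The following is a minimal Python sketch of those ingredients only, using a small made-up joint distribution p(u, x): it computes the MAP decode for each channel output, the resulting joint table of signal and decode, and the exact mutual information I(U; X) that the paper upper-bounds. The bound itself is not reproduced here.

```python
import numpy as np

def mutual_information(joint):
    """I(A;B) in bits from a joint probability table p(a, b)."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (pa @ pb)[mask])))

# Made-up joint distribution p(u, x): rows are signals u, columns are channel outputs x.
p_ux = np.array([[0.30, 0.10, 0.05],
                 [0.05, 0.25, 0.25]])

# MAP decode for each output x: the most probable signal given that output.
map_decode = p_ux.argmax(axis=0)

# Joint distribution of signal U and MAP decode U_hat -- the table the bound is stated in.
n_u = p_ux.shape[0]
p_u_uhat = np.zeros((n_u, n_u))
for x, u_hat in enumerate(map_decode):
    p_u_uhat[:, u_hat] += p_ux[:, x]

print("I(U;X)     =", round(mutual_information(p_ux), 4))      # quantity being upper-bounded
print("I(U;U_hat) =", round(mutual_information(p_u_uhat), 4))  # decoding-based lower bound
print("P(error)   =", round(1.0 - np.trace(p_u_uhat), 4))      # overall MAP error probability
```

By the data processing inequality, I(U; U_hat) computed from the decode table is a lower bound on I(U; X); the paper's contribution is an upper bound expressed in terms of that same table.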

Cited by 5 publications (5 citation statements)
References 8 publications (15 reference statements)

“…For discrete inputs, classic work in information theory proved a number of upper bounds on this gap when the channel is known [46], with the Feder-Merhav bound perhaps being the most well known [34]; Feder-Merhav provides an upper bound on the channel capacity given the overall probability of error in MAP decoding. In a separate work [35], we computed a new upper bound on information I_UB(U; X) that is consistent with not just the overall probability of error as in the Feder-Merhav bound, but with the full confusion matrix obtained from optimal MAP decoding, and showed that the new bound is tight.…”
Section: Maximum a Posteriori Upper Bound (UB)
Mentioning confidence: 72%
“…This decoding gives the lowest average probability of error and the corresponding information lower bound can be used as a benchmark for information estimates derived from other model-free decoding approaches (that have at least the error probability of the MAP decoder); in Section 2.6 we compare Support Vector Machine (SVM), Gaussian Decoding (GD) and Neural Network (NN) decoding approaches. Upper bounds like the Feder-Merhav bound [34] and our improvement on it [35] complete the picture by estimating the gap between optimal decoding-derived and exact information values (Section 2.5).…”
Section: Exact Information Calculations For Fully Observed Reaction N...
Mentioning confidence: 99%
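To make the benchmarking logic in the statement above concrete, here is a hedged sketch (using a made-up joint distribution p(u, x), not data or code from the cited work): any deterministic decoder's signal/decode table yields an information lower bound via the data processing inequality, the MAP decoder achieves the lowest error probability, and the gap between the exact I(U; X) and the decoding-derived value is what the Feder-Merhav bound [34] and the tighter bound [35] estimate from decoding statistics alone.

```python
import numpy as np

def mutual_information(joint):
    """I(A;B) in bits from a joint probability table."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (pa @ pb)[mask])))

def decoder_stats(p_ux, decode):
    """Mutual information of the (signal, decode) table and the error probability
    for a deterministic decoder mapping output index x to signal index decode[x]."""
    n_u = p_ux.shape[0]
    p_u_uhat = np.zeros((n_u, n_u))
    for x, u_hat in enumerate(decode):
        p_u_uhat[:, u_hat] += p_ux[:, x]
    return mutual_information(p_u_uhat), 1.0 - np.trace(p_u_uhat)

# Made-up joint distribution p(u, x) (illustration only).
p_ux = np.array([[0.30, 0.10, 0.05],
                 [0.05, 0.25, 0.25]])

i_exact = mutual_information(p_ux)
decoders = {"MAP": p_ux.argmax(axis=0),          # optimal: lowest error probability
            "suboptimal": np.array([0, 0, 1])}   # an arbitrary worse decoder

for name, dec in decoders.items():
    i_lb, p_err = decoder_stats(p_ux, dec)
    print(f"{name:>10}: lower bound {i_lb:.3f} bits, "
          f"error {p_err:.3f}, gap to exact {i_exact - i_lb:.3f} bits")
```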
“…where I_min and I_max are pre-defined constant values between zero and the maximum mutual information between x and y given any joint distribution p(x, y) ∈ P_XY whose marginals satisfy ∑_x p(x, y) = p(y) and ∑_y p(x, y) = q(x). Hledík et al. (2019) prove that the maximum mutual information max…”
Section: Mutual Information Constraint
Mentioning confidence: 94%
“…To overcome this issue, various machine-learning models, such as neural networks, can be used for classifiers to improve estimates [99]. In addition to the lower bound, an upper bound on the mutual information was derived [135].…”
Section: The Decoding-based Approach
Mentioning confidence: 99%
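As a rough illustration of the classifier-based route mentioned above, the sketch below trains a generic classifier on synthetic signal/response pairs (all names and numbers are hypothetical, not taken from [99] or [135]) and turns its empirical confusion matrix into a plug-in decoding-based lower bound on the mutual information.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Synthetic stand-in data: 3 signal classes, 2-D noisy "responses" (hypothetical numbers).
n_per_class, n_classes = 2000, 3
u = np.repeat(np.arange(n_classes), n_per_class)
x = rng.normal(loc=1.5 * u[:, None], scale=1.0, size=(u.size, 2))

# Decode the signal from the response with a generic classifier.
# (In practice one would evaluate on held-out data or cross-validate; this sketch
# predicts on the training set, which makes the plug-in estimate optimistic.)
clf = LogisticRegression(max_iter=1000).fit(x, u)
u_hat = clf.predict(x)

# Empirical joint table of (signal, decode) and its plug-in mutual information,
# a decoding-based lower bound on I(U; X).
joint = confusion_matrix(u, u_hat).astype(float)
joint /= joint.sum()
pu = joint.sum(axis=1, keepdims=True)
puh = joint.sum(axis=0, keepdims=True)
mask = joint > 0
i_lb = float(np.sum(joint[mask] * np.log2(joint[mask] / (pu @ puh)[mask])))
print(f"decoding-based lower bound: {i_lb:.3f} bits")
```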