2018
DOI: 10.3390/e20120896
Tight Bounds on the Rényi Entropy via Majorization with Applications to Guessing and Compression

Abstract: This paper provides tight bounds on the Rényi entropy of a function of a discrete random variable with a finite number of possible values, where the considered function is not one-to-one. To that end, a tight lower bound on the Rényi entropy of a discrete random variable with a finite support is derived as a function of the size of the support and the ratio of the maximal to minimal probability masses. This work was inspired by the recently published paper by Cicalese et al., which is focused on the Shannon entropy …
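As a point of reference for the quantities the abstract refers to, here is a minimal Python sketch (not the paper's bound itself) that computes the Rényi entropy of a finitely supported distribution together with its support size and the ratio of its maximal to minimal probability masses; the example distribution is made up for illustration.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha (in nats); alpha = 1 recovers the Shannon entropy."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

# A hypothetical distribution on a support of size n = 4; the abstract's lower bound
# is parameterized by the support size and the ratio of the maximal to the minimal mass.
p = np.array([0.4, 0.3, 0.2, 0.1])
beta = p.max() / p.min()          # max-to-min probability ratio, here beta = 4
for alpha in (0.5, 1.0, 2.0):
    print(f"H_{alpha}(p) = {renyi_entropy(p, alpha):.4f} nats "
          f"(support size {len(p)}, max/min ratio {beta:.0f})")
```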

Cited by 32 publications (40 citation statements); references 91 publications (200 reference statements).
“…To prove our results, we use ideas and techniques from majorization theory [39], a mathematical framework that has proved very useful in information theory (e.g., see [5], [6], [7], [8], [17], [23], [24], [48] and references therein). In this section we recall the notions and results that are relevant to our context.…”
Section: Preliminary Results (mentioning)
confidence: 99%
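For readers unfamiliar with the framework invoked here, the following sketch illustrates the partial-sum definition of majorization and the Schur-concavity of the Rényi entropy (if p majorizes q, then H_α(p) ≤ H_α(q) for every order α > 0). It is an illustration only, not code or an example taken from [39] or the works cited above, and the distributions are made up.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha (in nats); alpha = 1 is the Shannon entropy."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

def majorizes(p, q, tol=1e-12):
    """True if p majorizes q: equal totals, and the partial sums of the
    components sorted in decreasing order dominate those of q."""
    p = np.sort(np.asarray(p, dtype=float))[::-1]
    q = np.sort(np.asarray(q, dtype=float))[::-1]
    return bool(abs(p.sum() - q.sum()) < tol and
                np.all(np.cumsum(p) >= np.cumsum(q) - tol))

# p majorizes q, hence H_alpha(p) <= H_alpha(q) for every alpha > 0
# (Rényi entropies are Schur-concave).
p = [0.6, 0.2, 0.1, 0.1]
q = [0.4, 0.3, 0.2, 0.1]
assert majorizes(p, q) and not majorizes(q, p)
for alpha in (0.5, 1.0, 2.0):
    assert renyi_entropy(p, alpha) <= renyi_entropy(q, alpha) + 1e-12
```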
“…for all t > 0 (although, unlike the asymptotic result in (149), the refined bound for a finite n does not lend itself to a closed-form expression as a function of n; see also [63, Remark 3], which provides such a refinement of the bound on D(Q‖U_n) for finite n in a different approach). From (141), (146) and (150), it follows similarly to (153) that for all ρ > 1…”
Section: Illustration Of Theorem 7 and Further Results (mentioning)
confidence: 99%
“…the asymptotic result in (149) can be obtained from [63, Lemma 4] and vice versa; however, in [63], the focus is on the Rényi divergence from the equiprobable distribution, whereas the result in (149) is obtained by specializing the asymptotic expression in (134) for a general f-divergence. Note also that the result in [63, Lemma 4] is restricted to α > 0, whereas the results in (149) and (150) cover all values of α ∈ R. In view of (146), (149), (153), (155), and the special cases of the Alpha divergences in (140)-(144), it follows that for all ρ > 1 and for all integers n ≥ 2, max_{Q ∈ P_n(ρ)}…”
Section: Illustration Of Theorem 7 and Further Results (mentioning)
confidence: 99%
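The two statements above contrast the Rényi divergence from the equiprobable distribution, as treated in [63], with a general f-divergence approach; the equation numbers (134)-(155) belong to the citing paper and are not reproduced here. What can be checked independently is the standard identity D_α(Q‖U_n) = log n − H_α(Q), which is why bounds on the divergence from the uniform distribution translate into bounds on the Rényi entropy and vice versa. A small numerical sketch with a made-up distribution Q:

```python
import numpy as np

def renyi_entropy(p, alpha):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

def renyi_divergence(p, q, alpha):
    """Rényi divergence D_alpha(p || q) in nats; alpha = 1 is the KL divergence."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    if np.isclose(alpha, 1.0):
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
    return float(np.log(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha - 1.0))

# Check the identity D_alpha(Q || U_n) = log n - H_alpha(Q), with U_n uniform on n points.
q = np.array([0.5, 0.25, 0.15, 0.10])
n = len(q)
u = np.full(n, 1.0 / n)
for alpha in (0.5, 1.0, 2.0):
    lhs = renyi_divergence(q, u, alpha)
    rhs = np.log(n) - renyi_entropy(q, alpha)
    assert np.isclose(lhs, rhs)
    print(f"alpha={alpha}: D(Q||U_n) = {lhs:.4f} = log n - H_alpha(Q)")
```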
“…The large deviation behavior of guessing was studied in [13,14]. The relation between guessing and variable-length lossless source coding was explored in [3,15,16].…”
Section: Related Work (mentioning)
confidence: 99%
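The guessing results referenced here build on the standard moment bound for optimal guessing (Arikan's bound), E[G*(X)^ρ] ≤ exp(ρ · H_{1/(1+ρ)}(X)), where G*(X) is the guess number when candidates are tried in decreasing order of probability. The sketch below checks this bound numerically; the distribution is hypothetical and the bound is the textbook form, not a statement quoted from [13]-[16].

```python
import numpy as np

def optimal_guessing_moment(p, rho):
    """rho-th moment of the number of guesses when guessing in decreasing
    order of probability (the optimal strategy for any rho > 0)."""
    p = np.sort(np.asarray(p, dtype=float))[::-1]
    ranks = np.arange(1, len(p) + 1)
    return float(np.sum(p * ranks ** rho))

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha != 1 (in nats)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

# Arikan-style upper bound: E[G*(X)^rho] <= exp(rho * H_{1/(1+rho)}(X)).
p = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
for rho in (0.5, 1.0, 2.0):
    moment = optimal_guessing_moment(p, rho)
    bound = np.exp(rho * renyi_entropy(p, 1.0 / (1.0 + rho)))
    assert moment <= bound + 1e-12
    print(f"rho={rho}: E[G^rho] = {moment:.3f} <= {bound:.3f}")
```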