1978
DOI: 10.1109/tit.1978.1055891

Some equivalences between Shannon entropy and Kolmogorov complexity

Abstract: It is known that the expected codeword length L_UD of the best uniquely decodable (UD) code satisfies H(X) <= L_UD < H(X) + 1. Let X be a random variable which can take on n values. Then it is shown that the average codeword length L_{1:1} for the best one-to-one (not necessarily uniquely decodable) code for X is shorter than the average codeword length L_UD for the best uniquely decodable code by no more than (log_2 log_2 n) + 3. Let Y be a random variable taking on a finite or countable number of values and having entropy H.…
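The abstract's gap between one-to-one and uniquely decodable codes can be checked numerically. A minimal Python sketch, assuming the standard construction in which the best one-to-one code (empty codeword allowed) assigns the i-th most probable symbol the i-th shortest binary string, whose length is floor(log2(i)):

```python
import math

def entropy(probs):
    """Shannon entropy H(X) in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def one_to_one_avg_length(probs):
    """Average codeword length L_{1:1} of the best one-to-one code.

    Enumerating binary strings by length ("", "0", "1", "00", ...),
    the i-th shortest string (1-indexed) has length floor(log2(i));
    the best one-to-one code pairs it with the i-th most probable symbol.
    """
    p = sorted(probs, reverse=True)
    return sum(pi * math.floor(math.log2(i)) for i, pi in enumerate(p, 1))

n = 1024
uniform = [1.0 / n] * n
H = entropy(uniform)                  # 10 bits for a uniform 1024-ary source
L11 = one_to_one_avg_length(uniform)  # noticeably below H

# Since H(X) <= L_UD, the paper's bound L_UD - L_{1:1} <= log2(log2 n) + 3
# implies H - L11 <= log2(log2 n) + 3 as well.
bound = math.log2(math.log2(n)) + 3
print(H, L11, bound)
```

For the uniform source above, L_{1:1} ≈ 8.01 bits versus H = 10 bits, and the gap of about 2 bits indeed stays within the (log_2 log_2 n) + 3 ≈ 6.32-bit bound stated in the abstract.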

Cited by 105 publications (48 citation statements)
References 4 publications
“…Therefore, the stochastic complexity (6) as a model selection criterion is not affected in the large-sample situation and is mildly affected otherwise under reparameterization. A final problem is the term n^{-1/4} in our criterion (6). From Section 2 we know that it is obtained by an approximation.…”
Section: Discussion
confidence: 99%
“…The result then follows from Shannon's entropy lower bound of ∑ w_i log_2(1/w_i) on the average path length of any binary code [27]. The relation between this sum of logs of ranks and entropy is closely related to the efficiency of one-to-one codes in coding theory [9,23]; codes similar to the one shown in Figure 5 have also been used as part of more complex data compression schemes [6,13]. We note that an inequality in the other direction, …”
Section: Approximate Sum of Subtree Clustering
confidence: 98%
“…They are two sides of the same coin. The formal mathematical proof is rather subtle (see Kolmogorov, 1968; Zvonkin and Levin, 1970; Leung-Yan-Cheong and Cover, 1978; Zurek, 1989), but is intuitive and elicits from even the most hardened theorists exclamations like "amazing" (Cover and Thomas, 2006, p. 463) and "beautiful" (Li and Vitanyi, 1997, p.…”
Section: An 'Amazing' and 'Beautiful' Fact
confidence: 99%