2001
DOI: 10.1007/s00453-001-0060-4

Bounding the Inefficiency of Length-Restricted Prefix Codes

Cited by 18 publications (24 citation statements). References 20 publications.
“…However, since a Huffman tree can be very deep (height n−1 for a very skewed distribution), this would compromise our time bound. Therefore, we use an O(log σ)-restricted Huffman tree [30], which yields both the space and time bounds we want.…”
Section: Top-k Queries (mentioning)
confidence: 99%
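The height concern in the statement above is easy to reproduce. The following minimal Python sketch (not from any of the cited papers; the symbol count and Fibonacci-like weights are made up for illustration) computes Huffman codeword lengths and shows that a very skewed distribution forces codewords of length n−1, which is exactly what a length-restricted code avoids.

import heapq

def huffman_code_lengths(weights):
    # Standard Huffman construction; returns the codeword length of each symbol.
    # Heap entries are (subtree weight, tie-breaker, symbols in the subtree).
    heap = [(w, i, [i]) for i, w in enumerate(weights)]
    heapq.heapify(heap)
    depth = [0] * len(weights)
    tie = len(weights)
    while len(heap) > 1:
        w1, _, s1 = heapq.heappop(heap)
        w2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:          # merging deepens every symbol in both subtrees
            depth[s] += 1
        heapq.heappush(heap, (w1 + w2, tie, s1 + s2))
        tie += 1
    return depth

# A very skewed, Fibonacci-like distribution over n symbols: the two rarest
# symbols take part in every merge, so their codewords have length n - 1.
n = 12
weights = [1, 1]
while len(weights) < n:
    weights.append(weights[-1] + weights[-2])
print(max(huffman_code_lengths(weights)))   # prints 11, i.e. n - 1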
“…It follows from Milidiú and Laber's bound [15] that, for any ε with 0 < ε < 1/2, there is always a prefix code with maximum codeword length L = ⌈log n⌉ + ⌈log_φ(1/ε)⌉ + 1 and expected codeword length within an additive…”
Section: Additive Increase In Expected Codeword Length (mentioning)
confidence: 99%
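As a purely numeric illustration of the bound quoted above (the values of n and ε below are arbitrary, and the ceiling placement follows the formula as reconstructed here), the maximum codeword length L can be evaluated directly; φ denotes the golden ratio.

import math

phi = (1 + math.sqrt(5)) / 2             # golden ratio

def restricted_max_length(n, eps):
    # L = ceil(log2 n) + ceil(log_phi(1/eps)) + 1, as in the statement above.
    return math.ceil(math.log2(n)) + math.ceil(math.log(1 / eps, phi)) + 1

print(restricted_max_length(256, 0.01))   # 8 + 10 + 1 = 19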
“…Given a constant c > 1, we use Milidiú and Laber's algorithm [15] to build a prefix code with maximum codeword length L = ⌈log n⌉ + ⌈1/(c − 1)⌉ + 1. We call a character's codeword short if it has length at most L/c + 2, and long otherwise.…”
Section: Multiplicative Increase In Expected Codeword Length (mentioning)
confidence: 99%
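Again as a hypothetical numeric sketch (n and c below are arbitrary, chosen only to make the arithmetic concrete), the length limit and the short/long cutoff described in the statement can be computed as follows.

import math

def length_limit(n, c):
    # L = ceil(log2 n) + ceil(1 / (c - 1)) + 1, for a constant c > 1.
    return math.ceil(math.log2(n)) + math.ceil(1 / (c - 1)) + 1

n, c = 256, 1.5
L = length_limit(n, c)                    # 8 + 2 + 1 = 11
short_cutoff = L / c + 2                  # codewords of length <= L/c + 2 are "short"
print(L, short_cutoff)                    # 11 and about 9.33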
“…Finding fast algorithms for computing optimal depth-constrained binary trees (without the alphabetic constraint) is known to be a hard problem, and good approximate solutions are appearing only now [18], [12], [19], almost 40 years after the original Huffman algorithm. Imposing the alphabetic constraint renders the problem even harder [20], [21], [22], [23].…”
mentioning
confidence: 99%