A Universal Parallel Two-Pass MDL Context Tree Compression Algorithm
2015 · DOI: 10.1109/jstsp.2015.2403800

Abstract: Computing problems that handle large amounts of data necessitate the use of lossless data compression for efficient storage and transmission. We present a novel lossless universal data compression algorithm that uses parallel computational units to increase the throughput. The length-N input sequence is partitioned into B blocks. Processing each block independently of the other blocks can accelerate the computation by a factor of B, but degrades the compression quality. Instead, our approach is to first estima…
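A minimal sketch, for intuition only, of the naive baseline the abstract contrasts with: split the length-N input into B blocks and compress each block independently in parallel. Here zlib is a stand-in codec (an assumption; it is not the paper's MDL context tree coder, whose two-pass approach the truncated abstract begins to describe), and the names below are illustrative, not from the paper.

```python
# Sketch: naive block-parallel compression. Each worker sees only its
# own 1/B of the data, which is exactly why this baseline degrades
# compression quality relative to a shared statistical model.
import zlib
from concurrent.futures import ProcessPoolExecutor

def compress_block(block: bytes) -> bytes:
    # zlib stands in for a real universal coder.
    return zlib.compress(block, 9)

def parallel_block_compress(data: bytes, num_blocks: int) -> list[bytes]:
    n = len(data)
    size = -(-n // num_blocks)  # ceiling division: block size
    blocks = [data[i:i + size] for i in range(0, n, size)]
    with ProcessPoolExecutor(max_workers=num_blocks) as pool:
        return list(pool.map(compress_block, blocks))

if __name__ == "__main__":
    payload = b"abracadabra" * 10_000
    parts = parallel_block_compress(payload, num_blocks=4)
    print(len(payload), "->", sum(len(p) for p in parts))
```

Decoding is embarrassingly parallel as well, since each block is self-contained; the compression loss shows up because repeated structure spanning block boundaries is never exploited.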

Cited by 4 publications (4 citation statements) · References 24 publications

Citation statements:
“…This was also experimentally confirmed on real data gathered from network traffic [9]. Note that a combination of memory-assisted compression and parallel compression techniques that achieves both a high compression rate and a high compression speed makes compression-based redundancy elimination feasible on high-rate links as well [14], [15].…”
Section: Introduction (supporting; confidence: 59%)
“…It turns out that the optimum value is achieved by λ = ((1 − κ)/κ) · (n/m), for which the cost is obtained in (14). For κ = 1, it suffices to use the traditional two-part codes on the S-C link and ignore the memory content at M.…”
Section: Initialization (S and M) (mentioning; confidence: 99%)
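As a sanity check on the reconstructed expression (my reading of the garbled extraction, not stated in the excerpt), the κ = 1 remark follows directly: the optimum weight vanishes, so the memory at M contributes nothing.

```latex
\[
\lambda = \frac{1-\kappa}{\kappa}\cdot\frac{n}{m}
\quad\Longrightarrow\quad
\lambda\big|_{\kappa=1} = \frac{1-1}{1}\cdot\frac{n}{m} = 0 .
\]
```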
“…where (45) holds by choosing M = N^(1/g + δ). To show that the number of covered nodes is concentrated around its mean, we use Proposition 6 again with λ = α E[|∪ N_{r_δ}(µ_i, g)|].…”
Section: Proof of the Main Results (mentioning; confidence: 99%)
“…We note that improving compression speed and reducing the complexity of compression while maintaining an acceptable compression rate is the subject of active research in the compression community (cf. [45], where the authors improve both compression performance and compression speed by using parallel compression). In conclusion, high-speed compression algorithms are suitable where communication throughput is high and processing power is limited, e.g., 128 MB/sec for a 1 Gigabit/sec Ethernet connection.…”
Section: Compression Complexity (mentioning; confidence: 99%)
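The 128 MB/sec figure appears to assume binary prefixes for "Gigabit" (an inference on my part, not spelled out in the excerpt); with decimal prefixes the same link rate works out to 125 MB/sec.

```latex
\[
\frac{2^{30}\,\mathrm{bits/s}}{8\,\mathrm{bits/byte}} = 2^{27}\,\mathrm{bytes/s} = 128\,\mathrm{MB/s},
\qquad
\frac{10^{9}\,\mathrm{bits/s}}{8\,\mathrm{bits/byte}} = 125\,\mathrm{MB/s}.
\]
```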