2021 Data Compression Conference (DCC)
DOI: 10.1109/dcc50243.2021.00051

Improving Run Length Encoding by Preprocessing

Abstract: Run Length Encoding (RLE) is a long-standing, simple lossless compression scheme that is easy to implement and achieves good compression on input data containing repeating consecutive symbols. In its pure form, RLE is not applicable to natural text or other input data with short sequences of identical symbols. We present a combination of preprocessing steps that turn arbitrary byte-wise input data into a bit string which is highly suitable for RLE compression. The main…
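To make the abstract's premise concrete, here is a minimal sketch of classic byte-wise RLE in Python. The function names and the (count, value) output format are illustrative assumptions, not the paper's bit-string encoding:

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Encode a byte string as a list of (run length, byte value) pairs."""
    runs = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        runs.append((j - i, data[i]))
        i = j
    return runs


def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    """Invert rle_encode."""
    return b"".join(bytes([value]) * count for count, value in runs)


print(rle_encode(b"aaaabbbcc"))  # [(4, 97), (3, 98), (2, 99)] -- compresses well
print(rle_encode(b"abc"))        # [(1, 97), (1, 98), (1, 99)] -- longer than the input
```

The second call shows why pure RLE fails on natural text: without runs, every symbol also costs a count, so the output grows, which is the problem the paper's preprocessing addresses.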

Cited by 8 publications (3 citation statements) · References 10 publications
“…In particular, Zhang et al. employ in [ 81 ] a scalar quantization approach, converting non-zero output values represented by 32-bit floating point numbers into lower-precision integer values. Moreover, when it comes to non-DNN-specific compression methods, the research of Zhang et al. [ 81 ] emerges again in the analysis, supplementing quantization with the application of a classic compression method, namely, run-length encoding [ 97 ], on intermediate results in an effort to reduce transmission costs even further. In this regard, it is important to note that, while the pursued goal has definitely been to reduce the memory size of the data exchanged between nodes, there has also been an evident concern for avoiding or, at the very least, mitigating information loss in the compression–decompression process, something that we see in [ 81 ] with the use of run-length encoding, and also in [ 91 ], where Parthasarathy et al. apply the LZ4 algorithm [ 98 ] for the same purpose; both are lossless data compression methods.…”
Section: DNN Partitioning and Parallelism for Collaborative Inference
confidence: 99%
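The quantization-plus-RLE combination described in this statement can be illustrated with a short Python sketch. This is a toy under stated assumptions (8-bit uniform quantization, a hypothetical quantize_and_rle helper), not the actual scheme of Zhang et al. [81]:

```python
import numpy as np

def quantize_and_rle(x: np.ndarray):
    """Toy sketch: 8-bit scalar quantization followed by run-length encoding.

    Illustrative only; the scale/zero-point handling is deliberately minimal
    and does not reproduce the scheme of Zhang et al. [81].
    """
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255 if hi > lo else 1.0
    q = np.round((x - lo) / scale).astype(np.uint8)  # float32 -> uint8
    # Run-length encode the quantized stream as (count, value) pairs.
    flat = q.ravel()
    runs, count = [], 1
    for prev, cur in zip(flat, flat[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((count, int(prev)))
            count = 1
    runs.append((count, int(flat[-1])))
    return runs, lo, scale

# Sparse intermediate activations (many zeros) quantize to long zero runs:
act = np.array([0.0, 0.0, 0.0, 1.7, 1.7, 0.0, 0.0, 0.0], dtype=np.float32)
print(quantize_and_rle(act)[0])  # [(3, 0), (2, 255), (3, 0)]
```

Quantization itself is lossy, but the subsequent RLE stage is lossless, which matches the survey's point about mitigating information loss in the transport step.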
“…The Burrows-Wheeler transform (BWT) is a block-sorting data transformation algorithm that is used in data compression (Algorithm 5). It is not a standalone data compression method, however; rather, it is used as a component in other solutions to improve the performance of data compression algorithms [63]. BWT rearranges the input data so that similar data elements are grouped together.…”
Section: Burrows-Wheeler Transform (BWT)
confidence: 99%
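A naive reference implementation may help here. The rotation-sorting version below runs in O(n² log n) and is for illustration only; practical BWT implementations build the transform from a suffix array, and the sentinel character is an assumption of this sketch:

```python
def bwt(s: str, sentinel: str = "\x00") -> str:
    """Naive BWT: sort all rotations of s + sentinel, return the last column."""
    s += sentinel  # unique end marker makes the transform invertible
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)


def ibwt(r: str, sentinel: str = "\x00") -> str:
    """Invert the BWT by repeatedly prepending r to the sorted table."""
    table = [""] * len(r)
    for _ in range(len(r)):
        table = sorted(r[i] + table[i] for i in range(len(r)))
    # The row ending in the sentinel is the original string.
    row = next(t for t in table if t.endswith(sentinel))
    return row[:-1]


print(bwt("banana"))        # 'annb\x00aa' -- equal characters clustered together
print(ibwt(bwt("banana")))  # 'banana'
```

The output on "banana" shows the grouping effect the quoted passage describes: the three a's and two n's end up adjacent, ready for a follow-up transform.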
“…However, an important use of the MTF transform is in Burrows-Wheeler transform-based compression. The Burrows-Wheeler transform is very good at producing a sequence that exhibits local frequency correlation from text and certain other special classes of data [63]. Compression benefits greatly from following up the Burrows-Wheeler transform with an MTF transform before the final step of entropy encoding [73].…”
Section: Move-to-Front Transform (MTF)
confidence: 99%
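A minimal sketch of the move-to-front step on BWT-style output (the function name and the explicit alphabet parameter are assumptions of this illustration):

```python
def mtf_encode(data: str, alphabet: str) -> list[int]:
    """Move-to-front: emit each symbol's current index, then move it to front."""
    symbols = list(alphabet)
    out = []
    for ch in data:
        idx = symbols.index(ch)
        out.append(idx)
        symbols.insert(0, symbols.pop(idx))  # move the symbol to the front
    return out

# On BWT-like output, runs of equal symbols become runs of zeros,
# which feed well into RLE or the final entropy-coding step:
print(mtf_encode("annbaa", "abn"))  # [0, 2, 0, 2, 2, 0]
```

The repeated symbols map to zeros, which is exactly the local-frequency structure that makes the BWT-MTF-entropy-coding pipeline effective.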