2017
DOI: 10.1002/cpe.4283
Adaptive loss‐less data compression method optimized for GPU decompression

Abstract: There is no doubt that data compression is very important in computer engineering. However, most lossless data compression and decompression algorithms are very hard to parallelize, because they use dictionaries updated sequentially. The main contribution of this paper is to present a new lossless data compression method that we call adaptive loss‐less (ALL) data compression. It is designed so that the data compression ratio is moderate, but decompression can be performed very efficiently on the graphics processing unit (GPU). …
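To make the abstract's key idea concrete, here is a minimal CPU sketch of partition-based parallel decompression. The partition layout, the (count, byte) run-length token format, and all identifiers below are illustrative assumptions, not the paper's actual ALL format; std::thread stands in for GPU thread blocks.

```cpp
// Minimal sketch (assumed format, not the paper's ALL encoding): the input is
// split into partitions that are run-length encoded independently, so each
// partition can be decoded by its own thread -- the property that makes
// GPU-style parallel decompression possible.
#include <cstdint>
#include <functional>
#include <string>
#include <thread>
#include <vector>

struct Partition {                  // one independently decodable unit
    std::vector<uint8_t> tokens;    // (count, byte) pairs
    size_t decodedSize = 0;         // known from the (assumed) header
};

static void decodeOne(const Partition& p, std::string& out) {
    out.reserve(p.decodedSize);
    for (size_t i = 0; i + 1 < p.tokens.size(); i += 2)
        out.append(p.tokens[i], static_cast<char>(p.tokens[i + 1]));
}

// Decode all partitions concurrently; no state is shared between them.
std::string decodeAll(const std::vector<Partition>& parts) {
    std::vector<std::string> pieces(parts.size());
    std::vector<std::thread> workers;
    for (size_t i = 0; i < parts.size(); ++i)
        workers.emplace_back(decodeOne, std::cref(parts[i]), std::ref(pieces[i]));
    for (auto& w : workers) w.join();
    std::string result;
    for (auto& piece : pieces) result += piece;  // concatenate in order
    return result;
}

int main() {
    // Two partitions: "aaaa" + "bbbccc", each decoded in its own thread.
    std::vector<Partition> parts{{{4, 'a'}, 4}, {{3, 'b', 3, 'c'}, 6}};
    return decodeAll(parts) == "aaaabbbccc" ? 0 : 1;
}
```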

Cited by 16 publications (9 citation statements), 2018–2023 · References 13 publications
“…However, practically all LDC and decompression algorithms are shown to be extremely inefficient when parallelized, as they rely on sequentially updated dictionaries. The primary objective of [16] was to develop a novel LDC model called adaptive lossless (ALL) data compression. The model was constructed so that, when utilized with graphics processing units (GPU), the data compression ratio was reasonable while decompression was efficient.…”
Section: Related Work
confidence: 99%
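The sequential bottleneck described in this excerpt is easy to see in a classic adaptive-dictionary scheme. The sketch below uses LZW purely as a stand-in (it is not the algorithm of [16]): every decoded code may add a dictionary entry that the very next code depends on, so the decode loop cannot simply be split across threads.

```cpp
// Minimal LZW decoder (a stand-in example of an adaptive dictionary, not the
// cited paper's method). Each step may add a dictionary entry that the very
// next code depends on -- the sequential update that resists parallelization.
#include <cstdint>
#include <string>
#include <vector>

std::string lzwDecode(const std::vector<uint16_t>& codes) {
    std::vector<std::string> dict(256);
    for (int i = 0; i < 256; ++i) dict[i] = std::string(1, char(i));
    std::string out, prev;
    for (uint16_t code : codes) {
        // The code may refer to the entry currently being built (KwKwK case).
        std::string entry = (code < dict.size()) ? dict[code] : prev + prev[0];
        out += entry;
        if (!prev.empty())
            dict.push_back(prev + entry[0]);   // sequential dictionary update
        prev = entry;
    }
    return out;
}

int main() {
    // LZW codes for "ababab": 'a'(97), 'b'(98), then "ab"(256) twice.
    return lzwDecode({97, 98, 256, 256}) == "ababab" ? 0 : 1;
}
```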
“…Instead, Gompresso [48] utilizes a slightly modified LZ77 algorithm and partition-based Huffman coding to improve compression/decompression throughput on GPUs. Similarly, Adaptive Loss-Less (ALL) data compression [12] exploits run-length and adaptive dictionary-based coding for GPUs, as well as a partition-based compression/decompression scheme to improve throughput. However, both lines of research target general-purpose applications; hence, they are sub-optimal for DNN training.…”
Section: Related Work
confidence: 99%
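For contrast with run-length coding, here is a minimal sketch of the LZ77-style token decoding that Gompresso builds on; the Token layout is an illustrative assumption, not Gompresso's actual stream format. The back-reference copy reads from already-produced output, which is the data dependency a GPU decoder has to work around.

```cpp
// Minimal LZ77-style decode (illustrative token format, not Gompresso's or
// ALL's actual stream). A token is either a literal byte or an
// (offset, length) back-reference into the output produced so far.
#include <cstdint>
#include <string>
#include <vector>

struct Token {
    bool literal;
    uint8_t byte;              // valid when literal
    uint32_t offset, length;   // valid when !literal
};

std::string lz77Decode(const std::vector<Token>& tokens) {
    std::string out;
    for (const Token& t : tokens) {
        if (t.literal) { out += char(t.byte); continue; }
        size_t start = out.size() - t.offset;  // copy may overlap itself
        for (uint32_t i = 0; i < t.length; ++i)
            out += out[start + i];
    }
    return out;
}

int main() {
    // "abcabcabc": 3 literals, then one overlapping back-reference.
    std::vector<Token> t{{true, 'a', 0, 0}, {true, 'b', 0, 0},
                         {true, 'c', 0, 0}, {false, 0, 3, 6}};
    return lz77Decode(t) == "abcabcabc" ? 0 : 1;
}
```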
“…The performance of compression is important for sensors, so studies of parallel computing and hardware acceleration such as ASIC (application-specific integrated circuit) and FPGA (field-programmable gate array) [16][17][18] are valuable. A research hotspot is GPU (graphics processing unit) acceleration [19][20][21]. But as we mentioned in [4], the problem is that the parallel threads split the data window into smaller slices, which reduces the compression ratio.…”
Section: Journal of Sensors
confidence: 99%
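The ratio loss from slicing that this excerpt mentions can be demonstrated with any adaptive coder. The sketch below uses an LZW code count as a stand-in metric (this is not the method of [4]): each slice restarts with an empty dictionary, so repeated patterns must be re-learned per slice and the total number of emitted codes grows.

```cpp
// Demonstrates (with LZW as a stand-in coder, not the cited paper's method)
// why slicing the input window hurts the ratio: every slice restarts with an
// empty adaptive dictionary and re-learns the same repeated patterns.
#include <cstdio>
#include <map>
#include <string>
#include <vector>

// Number of LZW output codes for one input; fewer codes = better ratio.
size_t lzwCodeCount(const std::string& in) {
    std::map<std::string, int> dict;
    for (int i = 0; i < 256; ++i) dict[std::string(1, char(i))] = i;
    size_t codes = 0;
    std::string w;
    for (char c : in) {
        std::string wc = w + c;
        if (dict.count(wc)) { w = wc; continue; }
        ++codes;                               // emit code for w
        dict.emplace(wc, int(dict.size()));    // adaptive dictionary grows
        w = std::string(1, c);
    }
    if (!w.empty()) ++codes;
    return codes;
}

int main() {
    std::string data;
    for (int i = 0; i < 64; ++i) data += "abracadabra";  // repetitive input
    size_t whole = lzwCodeCount(data);
    size_t sliced = 0;                 // same bytes, 8 independent slices
    size_t n = data.size() / 8;
    for (int i = 0; i < 8; ++i) sliced += lzwCodeCount(data.substr(i * n, n));
    // Expect sliced > whole: each slice pays the dictionary warm-up again.
    std::printf("codes whole=%zu sliced=%zu\n", whole, sliced);
    return 0;
}
```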