2023
DOI: 10.1109/tbdata.2022.3201176

SZ3: A Modular Framework for Composing Prediction-Based Error-Bounded Lossy Compressors

Cited by 51 publications (14 citation statements)
References 41 publications
“…The test platform is equipped with two 28-core Intel Xeon Gold 6238R processors and 384 GB of memory. We use SZ3 [40], whose high modularity makes it easy to (de)couple lossless encoding from SZ compression. Comparison baselines.…”
Section: Methods (mentioning)
confidence: 99%
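The modularity mentioned in the statement above refers to SZ3's configurable pipeline (prediction, quantization, encoding, and a final lossless stage). A minimal C++ sketch of an error-bounded compression call is given below; the Config fields, enum values, and the lossless toggle are assumptions based on the commonly documented SZ3 API (SZ3/api/sz.hpp) and may differ between SZ3 versions, so check the headers of the release you use.

#include <vector>
#include <cstddef>
#include "SZ3/api/sz.hpp"   // SZ3's public C++ API header

int main() {
    // Example 3D field (dimensions are illustrative, not from the cited paper).
    const size_t nx = 100, ny = 200, nz = 300;
    std::vector<float> field(nx * ny * nz, 0.0f);

    SZ3::Config conf(nx, ny, nz);            // describe the data layout
    conf.errorBoundMode = SZ3::EB_ABS;       // absolute error bound mode (assumed enum name)
    conf.absErrorBound = 1e-3;               // target: |orig - dec| <= 1e-3
    conf.lossless = 0;                       // assumption: 0 skips the final lossless (zstd) stage,
                                             // the kind of (de)coupling the quoted statement refers to

    size_t cmpSize = 0;
    char *cmpData = SZ_compress(conf, field.data(), cmpSize);   // returns a heap-allocated buffer

    std::vector<float> decoded(nx * ny * nz);
    SZ_decompress(conf, cmpData, cmpSize, decoded.data());      // reconstruct within the error bound

    delete[] cmpData;    // caller frees the compressed buffer (ownership convention may vary)
    return 0;
}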
“…On the other hand, for the compression task, we compare our methods to the state-of-the-art compression scheme without a neural network (Liang et al., 2022) and the recently proposed coordinate-based neural network for compression (Huang & Hoefler, 2023).…”
Section: Methods (mentioning)
confidence: 99%
“…To this end, we combine our method with the existing HNeR-based compression scheme (Ladune et al., 2023) to apply image-wise compression, i.e., we train one neural network for the compression of each image. As a non-machine-learning baseline, we consider the modular, composable SZ3 framework (Liang et al., 2022). We also compare with the Fourier feature network with temporal encoding (FFN-T; Huang & Hoefler, 2023) proposed for compressing climate and weather data.…”
Section: Weather and Climate Data (mentioning)
confidence: 99%
“…Though data reduction is not the only intended use case for our approach, nor is it something we design for specifically (such as with network weight quantization for further data reduction [16]), we believe it is useful to the community to compare the compression ability of state-of-the-art SRNs with state-of-the-art compressors. We compare compression results from TTHRESH [2] and SZ3 [12,13,34] with our approach in Figure 5. We do not compare to a rendering-focused compressor such as cudaCompress [28] or a bricked version of TTHRESH [2], as recent work [30] has already shown that its decoding is significantly slower, its compression rates are smaller, its memory use is higher, and its image quality is worse than state-of-the-art SRNs.…”
Section: Scene Representation Network Comparison (mentioning)
confidence: 99%