2020
DOI: 10.1109/tip.2020.2985225

Efficient and Effective Context-Based Convolutional Entropy Modeling for Image Compression

Abstract: It has long been understood that precisely estimating the probabilistic structure of natural visual images is crucial for image compression. Despite the remarkable success of recent end-to-end optimized image compression, the latent code representation is assumed to be fully statistically factorized so that entropy modeling is feasible. Here we describe context-based convolutional networks (CCNs) that exploit statistical redundancies in the codes for improved entropy modeling. We introduce a 3D zigzag coding …
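The core ingredient named in the abstract, context-based (masked) convolution for entropy modeling, can be illustrated with a short sketch. The code below is a generic PixelCNN-style masked convolution in PyTorch, not the authors' CCN: the paper's code-dividing technique and 3D zigzag coding order are not reproduced, and the class and parameter names are ours.

```python
# Illustrative sketch only: a PixelCNN-style masked 2D convolution, the basic
# building block behind context-based entropy models. It is NOT the CCN from
# the paper (no code dividing, no 3D zigzag order); names are hypothetical.
import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    """Conv2d whose kernel is zeroed at positions that would look at 'future'
    codes under a raster-scan order, so each output depends only on
    already-decoded neighbours (a causal context)."""

    def __init__(self, *args, mask_type="A", **kwargs):
        super().__init__(*args, **kwargs)
        assert mask_type in ("A", "B")
        kH, kW = self.kernel_size
        mask = torch.ones_like(self.weight)
        # Zero everything strictly after the centre position; type-A masks
        # (used in the first layer) also zero the centre itself.
        mask[:, :, kH // 2, kW // 2 + (mask_type == "B"):] = 0
        mask[:, :, kH // 2 + 1:, :] = 0
        self.register_buffer("mask", mask)

    def forward(self, x):
        self.weight.data *= self.mask  # enforce causality before each use
        return super().forward(x)

if __name__ == "__main__":
    # Toy usage: predict per-position distribution parameters for a
    # single-channel latent code map from its causal context.
    codes = torch.randn(1, 1, 16, 16)
    context_model = nn.Sequential(
        MaskedConv2d(1, 32, kernel_size=5, padding=2, mask_type="A"),
        nn.ReLU(),
        MaskedConv2d(32, 2, kernel_size=5, padding=2, mask_type="B"),
    )
    params = context_model(codes)  # e.g. mean and scale of a Gaussian per code
    print(params.shape)            # torch.Size([1, 2, 16, 16])
```

In a learned codec, the two output channels would parameterize a conditional distribution for each latent code, and the negative log-likelihood under that distribution gives the estimated bit rate.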

Cited by 65 publications (36 citation statements)
References 34 publications
“…The quantization step will normalize the resulting matrix, taking the psychovisual properties into account [22], [23]. Next, proceed to the encoder process by simply converting the data from one format to another format or pixel-based format to get better image resolution [24], [25]. Lastly, the compressed image is reconstructed with the same resolution but a different data size.…”
Section: Methods (mentioning)
confidence: 99%
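The quantization step described in the excerpt above can be sketched for a JPEG-like pipeline. This is an illustration under our own assumptions (8x8 blocks, a 2D DCT, and the standard JPEG luminance quantization table); the quoted paper does not specify these details, and the function names are hypothetical.

```python
# Minimal sketch, assuming a JPEG-like pipeline: an 8x8 block of DCT
# coefficients is divided element-wise by a psychovisually weighted
# quantization table and rounded. The table is the standard JPEG luminance
# table; a different codec would use different weights.
import numpy as np
from scipy.fftpack import dct, idct

JPEG_LUMA_Q = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def quantize_block(block, q_table=JPEG_LUMA_Q):
    """2D DCT of an 8x8 pixel block followed by psychovisual quantization."""
    coeffs = dct(dct(block.astype(float) - 128.0, axis=0, norm="ortho"),
                 axis=1, norm="ortho")
    return np.round(coeffs / q_table).astype(int)

def dequantize_block(q_coeffs, q_table=JPEG_LUMA_Q):
    """Inverse step: rescale the integer codes and apply the inverse DCT."""
    coeffs = (q_coeffs * q_table).astype(float)
    block = idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho") + 128.0
    return np.clip(np.round(block), 0, 255).astype(np.uint8)

if __name__ == "__main__":
    block = np.random.randint(0, 256, size=(8, 8))
    codes = quantize_block(block)     # many high-frequency entries become 0
    recon = dequantize_block(codes)   # same resolution, smaller coded size
    print(codes)
    print(np.abs(recon.astype(int) - block).mean())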
“…When the compression ratio hit 40 percent, reconstructed images could still be more than 99% identical in terms of MS-SSIM. M. Li et al. [83] present context-based convolutional networks (CCNs) designed to be precise and efficient.…”
Section: E (mentioning)
confidence: 99%
“…Recent works are mainly in the area of lossy compression, which are based on convolutional neural networks (CNNs) [25, 26, 27, 28, 29], recurrent neural networks (RNNs) [30], generative adversarial networks (GANs) [31] and the context model [32, 33, 34]. Learning-based lossless image compression methods [35, 36, 37, 38, 39] use neural networks instead of the traditional encoder and decoder to achieve image compression. PixelCNN [40] and PixelCNN++ [41] as well as the methods based on bits-back coding [42, 43] and flow models [44, 45] shorten the distance between information theory and machine learning.…”
Section: Introduction (mentioning)
confidence: 99%