ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp40776.2020.9053413

Learned Lossless Image Compression with A Hyperprior and Discretized Gaussian Mixture Likelihoods

Abstract: Lossless image compression is an important task in the field of multimedia communication. Traditional image codecs such as WebP, JPEG 2000, and FLIF typically support a lossless mode. Recently, deep learning based approaches have begun to show potential for this task. HyperPrior is an effective technique originally proposed for lossy image compression. This paper generalizes the hyperprior from the lossy model to lossless compression and introduces an L2-norm term into the loss function to speed up the training procedure. Besides,…
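As a rough illustration of the two ideas the abstract names, here is a minimal sketch in PyTorch, assuming the common formulation in which a discretized Gaussian assigns each integer symbol the probability P(x) = CDF(x + 0.5) - CDF(x - 0.5); the exact form and weight of the paper's L2-norm term are assumptions, not the authors' code:

    import torch
    import torch.nn.functional as F

    def discretized_gaussian_likelihood(x, mean, scale):
        # Probability of an integer symbol under a Gaussian discretized
        # to unit bins: P(x) = CDF(x + 0.5) - CDF(x - 0.5).
        def cdf(v):
            return 0.5 * (1 + torch.erf(v / (scale * 2 ** 0.5)))
        return torch.clamp(cdf(x - mean + 0.5) - cdf(x - mean - 0.5), min=1e-9)

    def lossless_loss(x, mean, scale, l2_weight=1e-3):
        # Rate in bits per symbol, plus a hypothetical auxiliary L2 term
        # that pulls the predicted mean toward the true symbol values to
        # speed up training (the weight 1e-3 is an illustrative assumption).
        rate = -torch.log2(discretized_gaussian_likelihood(x, mean, scale)).mean()
        return rate + l2_weight * F.mse_loss(mean, x)

Minimizing the rate term alone trains the entropy model; the extra L2 penalty gives the mean predictor a direct regression signal early in training.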

Cited by 9 publications (3 citation statements)
References 23 publications
“…al [34] and Cheng et al. [17] suggested that compressing the residual of a compressed image with traditional methods is also feasible for lossless image compression with end-to-end models.…”
Section: Learned Lossless Compression (mentioning)
Confidence: 99%
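As a non-authoritative sketch of the residual-based pipeline this statement describes (every function name here is a hypothetical placeholder for a lossy codec and a lossless residual coder, not an API from the cited papers):

    import numpy as np

    def encode(image, lossy_encode, lossy_decode, residual_encode):
        # A lossy codec carries the bulk of the signal; the integer
        # residual is coded losslessly so the image is exactly recoverable.
        bits = lossy_encode(image)
        recon = lossy_decode(bits)
        residual = image.astype(np.int16) - recon.astype(np.int16)
        return bits, residual_encode(residual)

    def decode(bits, residual_bits, lossy_decode, residual_decode):
        recon = lossy_decode(bits)
        out = recon.astype(np.int16) + residual_decode(residual_bits)
        return out.astype(np.uint8)

Because the decoder reproduces the same lossy reconstruction, adding back the residual recovers the original pixels bit-exactly.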
“…When it comes to lossy picture compression, HyperPrior is one of the best methods available. Paper [22] generalizes the hyperprior from the lossy model to lossless compression and speeds up the training procedure by adding an L2-norm component to the loss function. This work also discusses the use of Gaussian mixture likelihoods to construct adaptive and flexible context models and evaluates various parametric models for the latent codes.…”
Section: Literature Review (mentioning)
Confidence: 99%
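To make the Gaussian-mixture idea concrete, here is a minimal sketch of a discretized Gaussian mixture likelihood, assuming K mixture components per symbol; tensor shapes and the clamping constant are illustrative assumptions rather than the paper's implementation:

    import torch

    def discretized_gmm_likelihood(x, logits, means, scales):
        # x: (...,); logits/means/scales: (..., K) for K mixture components.
        # Each component is a Gaussian discretized to unit-width bins.
        x = x.unsqueeze(-1)
        upper = 0.5 * (1 + torch.erf((x + 0.5 - means) / (scales * 2 ** 0.5)))
        lower = 0.5 * (1 + torch.erf((x - 0.5 - means) / (scales * 2 ** 0.5)))
        weights = torch.softmax(logits, dim=-1)  # mixture weights sum to 1
        return torch.clamp((weights * (upper - lower)).sum(-1), min=1e-9)

A mixture lets the context model fit multimodal symbol distributions that a single Gaussian cannot represent.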
“…By defining the number of nodes at the end of an encoding subnet, these methods transform the image into a feature vector with a small number of elements, thus reducing the redundancy of the image. So far, various kinds of deep learning networks, including recurrent neural networks (RNNs) [5,6], convolutional neural networks (CNNs) [7,8,9,10,11,12,13], and generative adversarial networks (GANs) [14,15], have been explored for image compression. Although these methods have achieved great results on certain datasets, there are still some shortcomings.…”
Section: Introduction (mentioning)
Confidence: 99%
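As a toy illustration of such an encoding subnet (the layer sizes below are arbitrary assumptions, not any cited architecture), strided convolutions shrink the image into a compact latent tensor whose number of elements is fixed by the final layer:

    import torch
    import torch.nn as nn

    # Each strided convolution halves the spatial resolution; the final
    # channel count and spatial size fix the number of latent elements.
    encoder = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        nn.Conv2d(64, 128, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        nn.Conv2d(128, 192, kernel_size=5, stride=2, padding=2),
    )

    x = torch.randn(1, 3, 256, 256)   # dummy RGB image
    print(encoder(x).shape)           # torch.Size([1, 192, 32, 32])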