2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01039
Asymmetric Gained Deep Image Compression With Continuous Rate Adaptation

Cited by 109 publications (58 citation statements)
References 16 publications
“…Quantization gains are derived from the multi-rate codec proposed in [18]. For each frame type f ∈ {I, P, B}, a feature-wise pair of gains (Γ^enc_f, Γ^dec_f) is learned.…”
Section: Variable Quantization Gains (citation type: mentioning; confidence: 99%)
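The gain mechanism quoted above is straightforward to sketch: a learned per-channel encoder gain scales the latent before rounding, and the matching decoder gain rescales it afterward, so the quantization step size (and hence the rate) varies per channel. Below is a minimal NumPy sketch, not the cited paper's implementation; the latent shape and the gain values are hypothetical placeholders for what the real codec learns end to end.

```python
import numpy as np

def gained_quantize(y, gain_enc, gain_dec):
    """Scale latent y channel-wise, round, then rescale.

    y:        latent features, shape (C, H, W)
    gain_enc: encoder gain vector, shape (C,). A larger gain means a
              finer effective quantization step and a higher rate.
    gain_dec: decoder gain vector, shape (C,), roughly 1/gain_enc.
    """
    y_scaled = y * gain_enc[:, None, None]   # apply encoder gain
    y_hat = np.round(y_scaled)               # hard quantization
    return y_hat * gain_dec[:, None, None]   # apply decoder gain

# Toy example: 4-channel latent with hypothetical learned gains.
rng = np.random.default_rng(0)
y = rng.normal(size=(4, 8, 8))
g_enc = np.array([4.0, 2.0, 1.0, 0.5])  # per-channel encoder gains
g_dec = 1.0 / g_enc                     # reciprocal decoder gains
y_rec = gained_quantize(y, g_enc, g_dec)
print("per-channel MSE:", ((y - y_rec) ** 2).mean(axis=(1, 2)))
```

Channels with larger encoder gains come back with lower distortion, which is exactly the per-feature rate control the quoted statement describes.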
“…Currently, most prevailing neural image codecs follow the VAE framework [5,6]. A series of works builds upon this framework, improving entropy estimation [11,18,26,40], quantization [1,3,5,19,51], variable rate [12,13], and perceptual quality [4,38]. Among them, we note that the autoregressive context model [26,40] achieves clear rate savings but brings much higher decoding complexity.…”
Section: Lossy Image Compression (citation type: mentioning; confidence: 99%)
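The variable-rate thread mentioned in this statement is what the titular continuous rate adaptation addresses: gain vectors trained at a few discrete rate points can be interpolated to reach any intermediate rate from a single model. Below is a sketch of one way to do this, exponential interpolation between two adjacent gain vectors, in the spirit of the gained-codec line of work; the vector values are hypothetical.

```python
import numpy as np

def interpolate_gains(gain_lo, gain_hi, l):
    """Exponential interpolation between two learned gain vectors.

    l = 1.0 reproduces gain_lo's rate point, l = 0.0 gain_hi's;
    intermediate l sweeps a continuum of rates from one trained model.
    """
    return gain_lo ** l * gain_hi ** (1.0 - l)

g_lo = np.array([0.5, 0.8, 1.0, 1.2])  # hypothetical low-rate gains
g_hi = np.array([2.0, 3.0, 4.0, 5.0])  # hypothetical high-rate gains
for l in (1.0, 0.5, 0.0):
    print(l, interpolate_gains(g_lo, g_hi, l))
```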
“…To address this limitation, advanced methods [8,28,32] propose conditional entropy models in which the elements are assumed to follow conditionally independent parametric probability models, with the distribution parameters adapted by exploiting the remaining dependencies. These methods can be divided along two directions: which parametric models to use [8,16,17,32] and how to model dependencies [8,28,32,37]. The former direction includes zero-mean Gaussian [8], Gaussian [32], Gaussian mixture [16], and asymmetric Gaussian [17].…”
Section: Learned Entropy Models (citation type: mentioning; confidence: 99%)
“…They can be divided along two directions: which parametric models to use [8,16,17,32] and how to model dependencies [8,28,32,37]. The former direction includes zero-mean Gaussian [8], Gaussian [32], Gaussian mixture [16], and asymmetric Gaussian [17]. Among them, we employ the most widely used one, i.e., Gaussian [32].…”
Section: Learned Entropy Models (citation type: mentioning; confidence: 99%)
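The choice of parametric model discussed above comes down to how the likelihood of each quantized latent is evaluated. A minimal sketch of the widely used conditional Gaussian case follows: given a predicted mean and scale for an element (in practice produced by a hyperprior or context model; here passed in directly), the estimated rate of a rounded value is the negative log of the Gaussian probability mass on its unit-width quantization bin.

```python
import math

def gaussian_cdf(x, mu, sigma):
    """Standard Gaussian CDF evaluated at x under N(mu, sigma^2)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def bits_for_symbol(y_hat, mu, sigma):
    """Estimated bits for one quantized latent under N(mu, sigma^2):
    -log2 of the probability mass on the unit-width bin around y_hat."""
    p = gaussian_cdf(y_hat + 0.5, mu, sigma) - gaussian_cdf(y_hat - 0.5, mu, sigma)
    return -math.log2(max(p, 1e-12))  # clamp to avoid log(0)

# A symbol close to the predicted mean is cheap to code;
# a surprising symbol far from the mean is expensive.
print(bits_for_symbol(0.0, mu=0.1, sigma=1.0))  # low bit cost
print(bits_for_symbol(5.0, mu=0.1, sigma=1.0))  # high bit cost
```

The richer choices in the quoted taxonomy (Gaussian mixture, asymmetric Gaussian) replace the single CDF here with a more flexible one, trading modeling power against complexity.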