2022
DOI: 10.1007/978-3-031-19790-1_24
Expanded Adaptive Scaling Normalization for End-to-End Image Compression

Cited by 6 publications (1 citation statement)
References 15 publications
“…Some works focus on the transform, e.g., generalized divisive normalization (GDN) (Ballé, Laparra, and Simoncelli 2016a,b, 2017), residual block (Theis et al 2017), attention module (Cheng et al 2020; Zhou et al 2019), non-local attention module (Chen et al 2021), attentional multi-scale back projection (Gao et al 2021), window attention module (Zou, Song, and Zhang 2022), stereo attention module (Wödlinger et al 2022), and expanded adaptive scaling normalization (EASN) (Shin et al 2022) have been used to improve the nonlinear transform. Invertible neural network-based architectures (Cai et al 2022; Helminger et al 2021; Ho et al 2021; Ma et al 2019, 2022a; Xie, Cheng, and Chen 2021) and transformer-based architectures (Qian et al 2022; Zhu, Yang, and Cohen 2022; Zou, Song, and Zhang 2022; Liu, Sun, and Katto 2023) have also been utilized to enhance the modeling capacity of the transforms.…”
Section: Related Work
Citation type: mentioning
confidence: 99%
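
The citation statement above lists GDN among the nonlinear transforms used in learned image compression. For reference, the sketch below shows the standard GDN formulation, y_i = x_i / sqrt(beta_i + sum_j gamma_ij * x_j^2), as a minimal PyTorch layer; the class name, parameter shapes, and initialization are illustrative assumptions and do not reproduce any of the cited implementations (including EASN).

import torch
import torch.nn as nn
import torch.nn.functional as F

class GDN(nn.Module):
    # Minimal generalized divisive normalization sketch:
    #   y_i = x_i / sqrt(beta_i + sum_j gamma_ij * x_j^2)
    # beta/gamma initialization is illustrative; practical implementations
    # also constrain both parameters to stay positive during training.
    def __init__(self, channels: int):
        super().__init__()
        self.beta = nn.Parameter(torch.ones(channels))
        self.gamma = nn.Parameter(0.1 * torch.eye(channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (N, C, H, W); the divisive term mixes channels at each
        # spatial location via a 1x1 convolution applied to the squared input.
        weight = self.gamma.view(*self.gamma.shape, 1, 1)
        norm = F.conv2d(x * x, weight, bias=self.beta)
        return x / torch.sqrt(norm)

In the synthesis (decoder) transform, an inverse GDN is typically used, with the division replaced by multiplication.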