2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01905
Deep Stereo Image Compression via Bi-directional Coding

Cited by 12 publications (5 citation statements)
References 22 publications
“…In this case, the PSNR with (without) the JCT module at the encoder (decoder) improves (drops) by about 0.16dB (0.73dB) at the same bpp level. We further report the compression results when the JCT module is directly replaced by other inter-view fusion operations such as concatenation in Mital et al (2022b), stereo attention module (SAM) in Wödlinger et al (2022) and bi-directional contextual transform module (Bi-CTM) in Lei et al (2022). These operations lead to an increase of the bitrate by 32.73%, 27.99%, 10.11% compared with our method.…”
Section: Ablation Study
confidence: 93%
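The bitrate overheads quoted above are relative increases in bits per pixel (bpp) over the authors' method. A minimal sketch of that computation, using hypothetical bpp values chosen only for illustration (they are not reported in the paper):

```python
def bitrate_increase_pct(bpp_baseline: float, bpp_alt: float) -> float:
    """Relative bitrate increase of an alternative module over the
    baseline, expressed in percent at matched reconstruction quality."""
    return (bpp_alt - bpp_baseline) / bpp_baseline * 100.0

# Hypothetical numbers for illustration only (not from the paper):
bpp_ours = 0.20        # codec with the JCT module
bpp_concat = 0.26546   # JCT replaced by plain concatenation
print(round(bitrate_increase_pct(bpp_ours, bpp_concat), 2))  # → 32.73
```

In practice such comparisons are made at matched quality (e.g. equal PSNR), typically by integrating over the rate-distortion curve rather than at a single operating point.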
“…We also test MV-HEVC (Tech et al, 2015) with the multi-view intra mode. Apart from that, we report the results of several recent DNN-based stereo image codecs on the InStereo2K and Cityscapes datasets, including DSIC (Liu et al, 2019), two variants of HESIC (Deng et al, 2021), BCSIC (Lei et al, 2022), and SASIC (Wödlinger et al, 2022).…”
Section: Methods
confidence: 99%
“…Invertible neural network-based architectures (Cai et al 2022; Helminger et al 2021; Ho et al 2021; Ma et al 2019, 2022a; Xie, Cheng, and Chen 2021) and transformer-based architectures (Qian et al 2022; Zhu, Yang, and Cohen 2022; Zou, Song, and Zhang 2022; Liu, Sun, and Katto 2023) have also been utilized to enhance the modeling capacity of the transforms. Some other works aim to improve the efficiency of entropy coding, e.g., the scale hyperprior entropy model (Ballé et al 2018), channel-wise entropy model (Minnen and Singh 2020), context model (Lee, Cho, and Beack 2019; Mentzer et al 2018; Minnen, Ballé, and Toderici 2018), 3D-context model (Guo et al 2020b), multi-scale hyperprior entropy model (Hu et al 2022), discretized Gaussian mixture model (Cheng et al 2020), checkerboard context model (He et al 2021), split hierarchical variational compression (SHVC) (Ryder et al 2022), information transformer (Informer) entropy model (Kim, Heo, and Lee 2022), bi-directional conditional entropy model (Lei et al 2022), unevenly grouped space-channel context model (ELIC) (He et al 2022), neural data-dependent transform (Wang et al 2022a), multi-level cross-channel entropy model (Guo et al 2022), and multivariate Gaussian mixture model. By constructing more accurate entropy models, these methods have achieved greater compression efficiency.…”
Section: Related Work
confidence: 99%