2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)
DOI: 10.1109/icmew.2017.8026313

Light field compression using depth image based view synthesis

Abstract: This paper describes a novel light field compression scheme using a depth image-based view synthesis technique. A small subset of views is compressed with HEVC inter coding and then used to reconstruct the entire light field. The residual of the whole light field can then be restructured as a video sequence and encoded by HEVC inter coding. Experiments show that our scheme significantly outperforms a similar view synthesis method which utilizes convolutional neural networks, and does not require training with …
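The abstract outlines a three-stage pipeline: code a sparse set of anchor views with HEVC inter coding, synthesize the remaining views from those anchors with depth image-based rendering, and code the restructured residual as a second HEVC video sequence. Below is a minimal sketch of that flow, not the authors' implementation: the light field shape (U, V, H, W, 3), the raster-scan residual ordering, and the `encode_views` / `synthesize_view` callables are assumptions standing in for an HEVC inter encoder and a DIBR renderer.

```python
# Minimal sketch of the scheme summarized in the abstract. Assumptions (not
# from the paper): the light field is a NumPy array of shape (U, V, H, W, 3);
# `encode_views` is a caller-supplied stand-in for an HEVC inter encoder that
# takes a list of frames and returns a bitstream; `synthesize_view` is a
# caller-supplied stand-in for the depth image-based renderer.
import numpy as np

def compress_light_field(lf, depth_maps, anchor_coords, encode_views, synthesize_view):
    U, V = lf.shape[:2]

    # 1. Code a small subset of anchor views as one video sequence (HEVC inter).
    anchors = [lf[u, v] for (u, v) in anchor_coords]
    anchor_bitstream = encode_views(anchors)

    # 2. Reconstruct every view from the anchors and their depth maps
    #    (depth image-based view synthesis). A real codec would synthesize
    #    from the *decoded* anchors to avoid encoder/decoder drift.
    synthesized = np.stack([
        np.stack([synthesize_view(anchors, depth_maps, (u, v)) for v in range(V)])
        for u in range(U)
    ])

    # 3. Restructure the residual of the whole light field as a video sequence
    #    (here: raster-scan view order) and code it with HEVC inter coding.
    residual = lf.astype(np.int16) - synthesized.astype(np.int16)
    residual_frames = [np.clip(residual[u, v] + 128, 0, 255).astype(np.uint8)
                       for u in range(U) for v in range(V)]
    residual_bitstream = encode_views(residual_frames)

    return anchor_bitstream, residual_bitstream
```

The 128 offset and clipping are just one convenient way to map signed residuals into 8-bit frames; the paper's exact residual mapping is not given in the excerpt above.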

Cited by 50 publications (43 citation statements)
References 23 publications
“…View synthesis based predictive coding is another research direction, followed in [24], where the authors use a linear approximation computed with Matching Pursuit for disparity-based view prediction. The authors in [25] and [26] instead use the convolutional neural network (CNN) architecture proposed in [27] for view synthesis and prediction. The prediction residue is then coded using HEVC [25], or using local residue transforms (SA-DCT) and coding [26].…”
Section: Light Field Compression
confidence: 99%
“…1) as the set of anchor light field views. Arguably, these distinct views on the extreme corners of a densely sampled light field contain all color and geometric information, from which the whole light field can be reconstructed [16]. Disparity is estimated between pairs of anchor views.…”
Section: Finetuned FlowNet 2.0 For Stereo
confidence: 99%
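The excerpt above pins the anchor set to the four extreme corner views of a densely sampled light field and then estimates disparity between pairs of anchors (the citing work fine-tunes FlowNet 2.0 for this). As a rough illustration of the same setup, the sketch below picks the corner subaperture views from a (U, V, H, W, 3) light field array and runs OpenCV's StereoSGBM as a generic stand-in disparity estimator; the array layout and the SGBM parameters are assumptions, not details from the cited paper.

```python
# Hedged sketch: corner anchor views plus pairwise disparity estimation.
# Assumes the light field is a (U, V, H, W, 3) uint8 NumPy array; StereoSGBM
# is only a generic stand-in for the learned disparity estimator.
import numpy as np
import cv2

def corner_anchors(lf):
    """Return the four extreme corner views of a (U, V, H, W, 3) light field."""
    U, V = lf.shape[:2]
    coords = [(0, 0), (0, V - 1), (U - 1, 0), (U - 1, V - 1)]
    return coords, [lf[u, v] for (u, v) in coords]

def estimate_disparity(left_rgb, right_rgb, max_disp=64):
    """Disparity between two anchor views lying on the same horizontal line."""
    left = cv2.cvtColor(left_rgb, cv2.COLOR_RGB2GRAY)
    right = cv2.cvtColor(right_rgb, cv2.COLOR_RGB2GRAY)
    sgbm = cv2.StereoSGBM_create(minDisparity=0,
                                 numDisparities=max_disp,  # must be a multiple of 16
                                 blockSize=5)
    # StereoSGBM returns fixed-point disparities scaled by 16.
    return sgbm.compute(left, right).astype(np.float32) / 16.0

# Usage: coords, anchors = corner_anchors(lf)
#        disp = estimate_disparity(anchors[0], anchors[1])  # top-left vs. top-right
```

StereoSGBM is used here only because it is a readily available classical matcher; any disparity or optical-flow estimator, including a fine-tuned FlowNet 2.0 as in the citing work, could fill the same role.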
“…Stereo specific compression techniques exist, and range from using a Markov random field (MRF) for disparity prediction to separate transforms for residual images [40,14,48,3,34,42]. Also closely related is light field image compression, where learning-based view synthesis techniques are used to take advantage of the vast redundancy between the subaperture images [20,21].…”
Section: Multi-view Compression
confidence: 99%