2020
DOI: 10.1109/tip.2019.2928873

Geometry-Aware Graph Transforms for Light Field Compact Representation

Abstract: The paper addresses the problem of energy compaction of dense 4D light fields by designing geometry-aware local graph-based transforms. Local graphs are constructed on super-rays, which can be seen as groupings of spatially and, depending on the scene geometry, angularly correlated pixels. Both non-separable and separable transforms are considered. Despite the local support of limited size defined by the super-rays, the Laplacian matrix of the non-separable graph remains of high dimension, and its diagonalization to compute th…
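
The abstract describes computing a graph Fourier transform, i.e. the eigendecomposition of a local graph Laplacian, on each super-ray. As a rough illustration of that idea (not the authors' implementation; the function name, arguments, and the use of NumPy's dense eigendecomposition are assumptions made for a small example), a non-separable local GFT could be sketched as follows:

```python
import numpy as np

def local_gft(pixel_values, edges, weights):
    """Non-separable graph Fourier transform on one super-ray (toy sketch).

    pixel_values : (N,) samples of the super-ray (e.g. luminance)
    edges        : iterable of (i, j) index pairs linking correlated pixels
    weights      : matching edge weights (e.g. geometry/disparity driven)
    """
    n = len(pixel_values)
    W = np.zeros((n, n))
    for (i, j), w in zip(edges, weights):
        W[i, j] = W[j, i] = w
    L = np.diag(W.sum(axis=1)) - W      # combinatorial graph Laplacian
    eigvals, U = np.linalg.eigh(L)      # GFT basis = Laplacian eigenvectors
    coeffs = U.T @ pixel_values         # transform coefficients
    return eigvals, U, coeffs
```

The abstract's point about dimensionality is visible here: for a super-ray of N pixels, the dense eigendecomposition costs O(N^3), which motivates the separable alternatives the paper also considers.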

Cited by 25 publications (34 citation statements)
References 40 publications

“…Since the solution is outperformed by HEVC at low bit rates, the authors suggest using it for applications such as archiving and instant storage on lenslet cameras. In [155], a GFT solution with a support defined on a super-ray segmentation is proposed for lenslet LF coding. As proposed in [156], super-ray segmentation groups light rays that have similar color values and are spatially close in 3D space, taking the disparity information into account.…”
Section: GFT-Based Coding (mentioning, confidence: 99%)
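
The grouping criterion in the excerpt above (similar color, spatial proximity in 3D via disparity) can be illustrated with a toy assignment step in the spirit of a K-means/SLIC-like clustering. The function name, the weighting factor `lam`, and the centroid representation are assumptions for illustration, not the actual procedure of [156]:

```python
import numpy as np

def assign_super_rays(colors, positions, disparities,
                      c_color, c_pos, c_disp, lam=0.1):
    """Toy assignment of light rays to super-ray centroids.

    colors      : (N, 3) per-ray color values (e.g. Lab)
    positions   : (N, 2) pixel coordinates in the reference view
    disparities : (N,)   per-ray disparity, used as a proxy for 3D proximity
    c_color, c_pos, c_disp : centroid colors (K, 3), positions (K, 2), disparities (K,)
    lam         : weight balancing geometric vs. color distance (assumed)
    """
    d_color = np.linalg.norm(colors[:, None, :] - c_color[None, :, :], axis=-1)
    d_pos = np.linalg.norm(positions[:, None, :] - c_pos[None, :, :], axis=-1)
    d_disp = np.abs(disparities[:, None] - c_disp[None, :])
    cost = d_color + lam * (d_pos + d_disp)    # color + disparity-aware spatial term
    return cost.argmin(axis=1)                 # super-ray label per light ray
```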
“…Therefore, as the encoder proceeds towards the higher hierarchical layers, the density of the reference views increases and the intermediate view prediction becomes more efficient [248]. Experimental results are shown for coding LF images from the EPFL, HCI 4D and Fraunhofer LF datasets (see Table 1), comparing the proposed solution to eight different LF coding solutions: i) the WaSP solution [158] as in JPEG Pleno VM 2.1 [127]; ii) the DCT-based solution MuLE [125] as in JPEG Pleno VM 2.1 [127]; iii) the lifting DWT-based solution in [148]; iv) the GFT-based solution in [155]; v) a PVS-based solution using serpentine ordering (see Fig. 9b) and HEVC; vi) the solution in [249] using transform-assisted view synthesis (see Section III-D2); vii) the solution in [250] using learning-based synthesis (see Section III-D3); and viii) the solution in [49], also using learning-based synthesis.…”
Section: DIBR-Based View Synthesis (mentioning, confidence: 99%)
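
The hierarchical reference-view structure mentioned in the first sentence of the excerpt can be pictured with a small sketch. The grid size, the corner-view initialization, and the spacing-halving rule below are assumptions for illustration, not the scheme of [248]:

```python
def view_layer(u, v, grid=9):
    """Toy hierarchical layer index for view (u, v) in a grid x grid light field.

    Layer 0 keeps only sparse reference views (here the corners); every higher
    layer halves the spacing, so the density of available references grows and
    intermediate views are predicted from ever closer neighbours.
    """
    step = grid - 1                     # spacing between layer-0 references
    layer = 0
    while step > 1:
        if u % step == 0 and v % step == 0:
            return layer
        step //= 2
        layer += 1
    return layer                        # remaining views form the last layer
```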
“…To be well suited for local separable graph transforms, the super-rays need to be consistent across the views. We hence consider the approach described in [8]: we first compute super-pixels in the top-left view and then project, using the top-left view's disparity, its segmentation labels to all the other views. The occluded pixels are assigned the label of the neighboring super-ray corresponding to the foreground objects (i.e.…”
Section: Local Support (mentioning, confidence: 99%)
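
A minimal sketch of that label projection follows, using a z-buffer on disparity so that the foreground ray wins when several rays land on the same target pixel. The sign convention of the disparity shift and the simple row-wise fill of disoccluded pixels are assumptions (the excerpt above assigns the foreground neighbour's label):

```python
import numpy as np

def project_labels(labels_ref, disp_ref, du, dv):
    """Project super-pixel labels from the top-left view to the view at offset (du, dv).

    labels_ref : (H, W) segmentation labels of the reference (top-left) view
    disp_ref   : (H, W) disparity map of the reference view
    Returns a label map for the target view; -1 marks disoccluded pixels,
    filled afterwards from a neighbouring (here: previous row-wise) label.
    """
    H, W = labels_ref.shape
    labels = np.full((H, W), -1, dtype=int)
    zbuf = np.full((H, W), -np.inf)     # keep the largest-disparity (foreground) ray
    for y in range(H):
        for x in range(W):
            d = disp_ref[y, x]
            xt = int(round(x - du * d))     # assumed sign convention
            yt = int(round(y - dv * d))
            if 0 <= xt < W and 0 <= yt < H and d > zbuf[yt, xt]:
                zbuf[yt, xt] = d
                labels[yt, xt] = labels_ref[y, x]
    for y in range(H):                  # naive disocclusion fill
        for x in range(1, W):
            if labels[y, x] == -1:
                labels[y, x] = labels[y, x - 1]
    return labels
```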
“…Therefore, the basis functions of each spatial graph transform U_{k,v} are not the same in the different views v, resulting in decreased correlation of the spatial transform coefficients x_{k,v} across views. To circumvent this issue, we have proposed in [8] a basis optimization procedure to find basis functions U_{k,v} that approximately diagonalize their respective Laplacians L_{k,v} while being coherent across the views, given the scene geometry. By coherence, we mean the similarity of the eigenvectors in different views along the epipolar lines of the light field.…”
Section: Spatial Graph Transforms (mentioning, confidence: 99%)
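
The cross-view basis optimization of [8] is not reproduced here. As a much simpler illustration of what "coherent across the views" can mean, one could at least remove the sign ambiguity of the per-view Laplacian eigenvectors relative to a reference view (assuming, for this toy example, that the super-ray has the same number of pixels in every view):

```python
import numpy as np

def sign_aligned_bases(laplacians):
    """Per-view GFT bases with eigenvector signs aligned to the first view.

    laplacians : list of (n, n) Laplacians L_{k,v}, one per view, all of size n.
    Returns one basis U_{k,v} per view; each eigenvector of every other view is
    flipped so that it correlates positively with the reference view's eigenvector.
    """
    bases = [np.linalg.eigh(L)[1] for L in laplacians]   # columns sorted by eigenvalue
    U_ref = bases[0]
    aligned = [U_ref]
    for U in bases[1:]:
        signs = np.sign(np.sum(U_ref * U, axis=0))       # per-eigenvector correlation sign
        signs[signs == 0] = 1.0
        aligned.append(U * signs)
    return aligned
```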