2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2017
DOI: 10.1109/cvprw.2017.218
Linearizing the Plenoptic Space

Abstract: The plenoptic function, also known as the light field or the lumigraph, contains the information about the radiance of all optical rays that go through all points in space in a scene. Since no camera can capture all this information, one of the main challenges in plenoptic imaging is light field reconstruction, which consists in interpolating the ray samples captured by the cameras to create a dense light field. Most existing methods perform this task by first attempting some kind of 3D reconstruction of the v…



Cited by 5 publications (6 citation statements)
References 21 publications
“…The depth maps have been computed by stereo matching between the input image and an adjacent image with a disparity of approximately 10 pixels at 1000 × 1000 resolution. Linear maps (similar to Nieto et al. [23]) are also computed using stereo matching, but they encode the derivatives for horizontal and vertical camera displacements. Thus, they are based on the correspondences retrieved between three images (reference, horizontal displacement, and vertical displacement).…”
Section: Results
Mentioning, confidence: 99%
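The linear maps described in the statement above can be sketched as a per-pixel warp: instead of a single depth value, each pixel stores the derivatives of its image position with respect to horizontal and vertical camera displacement, estimated from two shifted views. The function, array names, and toy numbers below are illustrative assumptions, not the cited papers' implementation:

```python
import numpy as np

def warp_with_linear_maps(pixels, jac_x, jac_y, dtx, dty):
    """Warp pixel coordinates using per-pixel displacement derivatives.

    pixels : (N, 2) array of (x, y) positions in the reference view.
    jac_x  : (N, 2) derivative of each position w.r.t. a horizontal
             camera displacement (estimated from a horizontally shifted view).
    jac_y  : (N, 2) derivative w.r.t. a vertical camera displacement
             (estimated from a vertically shifted view).
    dtx, dty : scalar camera displacements for the novel view.
    """
    return pixels + dtx * jac_x + dty * jac_y

# Toy example: one Lambertian pixel whose motion is pure disparity
# (10 px of image motion per unit of camera displacement).
pixels = np.array([[100.0, 50.0]])
jac_x = np.array([[10.0, 0.0]])   # moves horizontally with the camera
jac_y = np.array([[0.0, 10.0]])   # moves vertically with the camera

warped = warp_with_linear_maps(pixels, jac_x, jac_y, dtx=0.5, dty=0.0)
print(warped)  # [[105.  50.]]
```

For a Lambertian scene the two derivative maps are redundant with a depth map; their interest, per the quoted statement, is that they are measured directly from correspondences and can capture motion that disagrees with a single depth value.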
“…While small specular reflections may go unnoticed even though they reduce photo-realism, reconstruction fails entirely for fully refractive or reflective objects. Some recent advances in those techniques are more robust to this problem by taking into account the viewing direction [19,20] or the disparity of non-Lambertian features [23,22]. Other techniques focus specifically on transparent and refractive objects.…”
Section: Related Work
Mentioning, confidence: 99%
“…When such objects, so-called non-Lambertian objects, are present in the scene, the hypothesis that pixel displacement is linear as a function of camera displacement no longer holds. Adapting the DIBR principles to non-Lambertian objects is nevertheless possible by exploiting additional information, such as structure, normals, and indices of refraction [28], or a more accurate approximation of the pixel displacement [29][30][31] (the solution chosen in RVS).…”
Section: Frequent Artifacts
Mentioning, confidence: 99%
“…Alternatively, to model the non-Lambertian surface itself, it is possible to track its feature movements on the surface [29,33,34]. DIBR can be generalized to non-Lambertian objects by replacing the usual depth maps with the coefficients of a polynomial approximating the displacement of non-Lambertian features [30,31].…”
Section: Non-Lambertian Case
Mentioning, confidence: 99%
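The generalization mentioned above, replacing a depth map with polynomial displacement coefficients, can be sketched as follows. The quadratic model, function name, and numbers are illustrative assumptions, not the exact parameterization of [30,31]:

```python
import numpy as np

def displace(p0, coeffs, dt):
    """Evaluate a per-pixel polynomial displacement model.

    p0     : (2,) reference position of the feature.
    coeffs : list of (2,) polynomial coefficients c1, c2, ... so that
             p(dt) = p0 + c1*dt + c2*dt**2 + ...
             A Lambertian pixel needs only c1 (the linear, depth-like
             term); non-Lambertian features get higher-order terms.
    dt     : scalar camera displacement.
    """
    p = np.asarray(p0, dtype=float)
    for k, c in enumerate(coeffs, start=1):
        p = p + np.asarray(c, dtype=float) * dt ** k
    return p

# A specular feature whose apparent motion bends: a linear term plus a
# quadratic correction that a plain depth map could not represent.
p = displace([100.0, 50.0], coeffs=[[10.0, 0.0], [-2.0, 0.0]], dt=1.0)
print(p)  # [108.  50.]
```

With only the first coefficient this reduces to standard depth-based warping, which is why the polynomial coefficients can slot into a DIBR pipeline where the depth map used to be.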
“…Applying the shearlet transform [23] to epipolar plane images (EPI) makes it possible to render scenes with non-Lambertian objects and semi-transparency by implicitly segmenting them [24], but it does not approximate curved feature paths in the 4D light field. Indeed, non-Lambertian features are not constrained to a plane; hence, even locally, they have to be described with more complex models, using the general linear camera approximation [25], [26], a local linear approximation [27], or a global approximation with Bézier curves [28]. Multiplane Image (MPI) rendering approximates the straight lines visible in EPIs by segmenting the scene into depth layers [29], [30] and appears resistant to artifacts created by non-Lambertian objects in the scene [6].…”
Section: Related Work
Mentioning, confidence: 99%