2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00823
LiFF: Light Field Features in Scale and Depth

Abstract: Feature detectors and descriptors are key low-level vision tools that many higher-level tasks build on. Unfortunately these fail in the presence of challenging light transport effects including partial occlusion, low contrast, and reflective or refractive surfaces. Building on spatio-angular imaging modalities offered by emerging light field cameras, we introduce a new and computationally efficient 4D light field feature detector and descriptor: LiFF. LiFF is scale invariant and utilizes the full 4D light fiel…

Cited by 65 publications (50 citation statements)
References 52 publications
“…It is also possible to combine these techniques to meet specific requirements for spatial and angular density. For instance, in [55], a gantry structure with a lenslet camera is used to capture LFs with a baseline varying from micrometers (inside the lenslet camera) to meters (by varying the camera poses). Additionally, in [12], a one-dimensional (1D) array of GoPro cameras in a vertical arc is placed on a horizontally rotating gantry structure to capture dense LFs with a 360° FoV.…”
Section: A. LF Acquisition
Confidence: 99%
“…To the best of our knowledge, ours is the first work to combine stereo and light field imaging for large-scale depth-range estimation, although some works use more than one light-field camera for other purposes. For example, Dansereau et al. [12] directly extract feature information from a light-field-based 4D model using 3 light-field cameras. Wang et al. [32] attached an additional standard camera to an LF camera for LF video using a learning-based approach.…”
Section: Related Work
Confidence: 99%
“…The difference from ESPCN is the use of 3D LF data as the network input (the input size is W × H × MN, where W and H are the width and height of each sub-aperture view in the LF, and M, N are the sizes of the two angular dimensions). As shown in Fig. 5, we implement an experiment to show the superiority of M-ESPCN; note that, to be fair, the training data for both ESPCN and M-ESPCN comes from the same dataset [12]. Using M-ESPCN, complementary information in the sub-aperture views of the LF is exploited in the EPI domain, which leads to less noise and better performance than naive up-sampling methods such as bicubic, or up-sampling sub-aperture views independently using ESPCN.…”
Section: Hardware and Data Preprocessing
Confidence: 99%
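The W × H × MN input layout described in the excerpt above can be sketched as follows (a minimal NumPy sketch; the array name, dimension sizes, and storage order of the light field are assumptions, not taken from the cited paper):

```python
import numpy as np

# Hypothetical 4D light field: M x N angular sub-aperture views,
# each of spatial size H x W (single channel).
M, N = 5, 5    # sizes of the two angular dimensions
W, H = 64, 48  # width and height of each sub-aperture view

lf = np.random.rand(M, N, H, W)  # assumed storage order (M, N, H, W)

# Flatten the two angular dimensions into one channel axis,
# yielding the W x H x MN tensor described for the M-ESPCN input.
net_input = lf.transpose(3, 2, 0, 1).reshape(W, H, M * N)

print(net_input.shape)  # (64, 48, 25)
```

Each spatial location of `net_input` thus stacks the corresponding pixel from all MN sub-aperture views, which is what lets the network exploit complementary angular information.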
“…Some studies have also focused on using the technology in space applications [60,61]. The light field features in scale space and depth proposed in [62] can also be included in this category of algorithms.…”
Section: Introduction
Confidence: 99%