2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2016.101

Information-Driven Adaptive Structured-Light Scanners

Cited by 10 publications (3 citation statements) | References 31 publications

“…Although there has been work in the robotics and vision community on adaptive sensing of features in the scene relevant to a particular inference task, these approaches do not incorporate the working principles of a particular 3D sensor in their algorithm design (Denzler and Brown, 2002; Paletta et al., 2000). In the field of structured light, Zhang et al. (2014) and Rosman et al. (2016) treat the number of projected patterns as a resource expenditure to be minimized while maximizing the information gain from the scene. In this article, we are interested in adaptive algorithms for LiDAR sensors that take into account physical constraints such as the power expended on far-away objects or on objects moving out of the FOV.…”
Section: Adaptive Sampling in 3D Models (mentioning)
confidence: 99%
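The pattern-count vs. information-gain tradeoff that the statement above credits to Zhang et al. (2014) and Rosman et al. (2016) can be illustrated with a greedy selection loop. The following is a minimal sketch under stated assumptions, not the cited authors' algorithm: each camera pixel is assumed to hold a discrete posterior over K projector-code hypotheses, candidate patterns are assumed to be binary masks over those hypotheses, and the `capture` callback, `budget`, and `stop_bits` parameters are hypothetical stand-ins for the projector/camera round-trip and the resource limit.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy (bits) of per-pixel discrete distributions, shape (P, K)."""
    return -np.sum(p * np.log2(p + eps), axis=-1)

def expected_gain(post, mask):
    """Expected entropy reduction from projecting one binary pattern.
    post: (P, K) per-pixel beliefs; mask: (K,) bool, hypotheses lit by the pattern."""
    p_lit = post[:, mask].sum(axis=1, keepdims=True)                     # (P, 1)
    post_lit = post * mask / (p_lit + 1e-12)
    post_unlit = post * ~mask / (1.0 - p_lit + 1e-12)
    cond = p_lit[:, 0] * entropy(post_lit) + (1.0 - p_lit[:, 0]) * entropy(post_unlit)
    return float(np.sum(entropy(post) - cond))

def bayes_update(post, mask, observed_lit, p_correct=0.95):
    """Update beliefs after observing which pixels were lit under `mask`."""
    like_lit = np.where(mask, p_correct, 1.0 - p_correct)                # (K,)
    like = np.where(observed_lit[:, None], like_lit, 1.0 - like_lit)     # (P, K)
    post = post * like
    return post / post.sum(axis=1, keepdims=True)

def adaptive_scan(post, patterns, capture, budget, stop_bits=0.1):
    """Greedily project patterns until beliefs sharpen or the budget is spent."""
    used = []
    for _ in range(budget):
        if entropy(post).max() < stop_bits:
            break                                    # scene decoded well enough
        gains = [-np.inf if i in used else expected_gain(post, m)
                 for i, m in enumerate(patterns)]
        best = int(np.argmax(gains))
        used.append(best)
        post = bayes_update(post, patterns[best], capture(patterns[best]))
    return post, used
```

In a real adaptive scanner the stopping rule would also fold in the per-pattern acquisition cost, which is exactly the resource term the cited works trade off against information gain.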
“…There is an extensive literature on reasoning about sensor placement. In computer vision this is often addressed as part of active perception [3], [5], [34]. In robotics, such efforts are part of next-best-view planning and dynamic planning [1], [15], [41], [24], [33], [19], although in our case we estimate planner-level notions (task success) rather than lower-level ones (geometric or classification uncertainty).…”
Section: Related Work (mentioning)
confidence: 99%
“…Structured light illumination (SLI) refers to a method of 3D scanning that uses a projector to cast a series of striped light patterns so that a camera can reconstruct depth from the warping of the pattern over the target object's surface [22,10,16,14,19,13,28,27,31]. Examples of SLI include single-pattern techniques, which continuously project a static pattern from which a 3D reconstruction can be made in a single snapshot [10,2,11,8].…”
Section: Introduction (mentioning)
confidence: 99%
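The stripe-projection principle summarized in that statement can be made concrete with a small sketch. This is an illustrative outline only, not any cited paper's pipeline: it assumes Gray-coded stripe patterns, already-thresholded camera captures, and a rectified pinhole projector-camera pair; the focal length `f` and baseline `b` below are made-up values, not calibration results from the sources.

```python
import numpy as np

def decode_gray_stripes(bits):
    """bits: (N, H, W) thresholded captures of N Gray-coded stripe patterns,
    most significant bit first. Returns the projector column index per pixel."""
    gray = np.zeros(bits.shape[1:], dtype=np.int64)
    for plane in bits.astype(np.int64):
        gray = (gray << 1) | plane
    # Standard Gray-to-binary conversion (prefix XOR of the bit planes).
    binary, shift = gray.copy(), gray >> 1
    while shift.any():
        binary ^= shift
        shift >>= 1
    return binary

def triangulate_depth(proj_col, f=800.0, b=0.1):
    """Depth from column disparity in a rectified projector-camera pair
    (focal length `f` in pixels and baseline `b` in meters are placeholders)."""
    h, w = proj_col.shape
    cam_col = np.tile(np.arange(w), (h, 1))
    disparity = cam_col - proj_col
    with np.errstate(divide="ignore"):
        return np.where(disparity != 0, f * b / disparity, np.inf)
```

A usage example would run `decode_gray_stripes` on ceil(log2(projector width)) thresholded images and pass the decoded column map to `triangulate_depth`; a real system would use calibrated intrinsics and extrinsics rather than this rectified approximation, and a single-pattern method would replace the multi-image decoding step with correspondence from one snapshot.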