A metrological characterization of the Kinect V2 time-of-flight camera
2016 · DOI: 10.1016/j.robot.2015.09.024

Cited by 116 publications (67 citation statements)
References 19 publications
“…The Kinect V2 is built upon the ToF principle, which means that it is a range finder with active IR illumination. The IR illumination follows a light cone distribution [35], which is not uniformly distributed. Thus, less-well-illuminated areas (e.g., corners, far objects) deliver inaccurate depth measurements.…”
Section: Discussion (mentioning)
confidence: 99%
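The statement above notes that poorly illuminated regions of the IR light cone yield unreliable depth. A common mitigation, sketched here as an assumption rather than anything prescribed by the cited paper, is to invalidate depth pixels whose active-IR amplitude falls below a threshold:

```python
import numpy as np

def mask_low_amplitude(depth: np.ndarray, amplitude: np.ndarray,
                       threshold: float) -> np.ndarray:
    """Invalidate depth pixels with weak IR return.

    Poorly illuminated areas (corners, distant objects) return a
    low-amplitude IR signal and hence unreliable depth, so mark
    those pixels NaN instead of trusting them.
    """
    depth = depth.astype(float).copy()
    depth[amplitude < threshold] = np.nan
    return depth

# Hypothetical 1x2 frame: one well-lit pixel, one dim pixel
d = mask_low_amplitude(np.array([[1.0, 2.0]]),
                       np.array([[100.0, 5.0]]),
                       threshold=10.0)
```

The threshold value is device- and scene-dependent; it would have to be calibrated rather than fixed a priori.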
“…The stereo camera and structured light camera use parallax theory to calculate depth, while the ToF camera is based on a beam distance measurement principle, calculating the distance from the travel time of a modulated beam between sensor and object. Among the mainstream ToF camera models, including Microsoft Kinect V2 [57], CubeEye [58], SwissRanger SR4000 [58], and PMD CamCube [59], the Kinect V2 is selected as the sensor array component in this paper for its wider FOV, higher resolution, longer ranging [35], and easier accessibility.…”
Section: RGB-D Camera Array System Setup (mentioning)
confidence: 99%
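The statement above summarizes the ToF principle: distance is computed from the travel time of a modulated beam. For a continuous-wave ToF sensor such as the Kinect V2, this is typically recovered from the phase shift of the modulated signal. A minimal sketch, assuming a single modulation frequency (the 16 MHz value below is illustrative, not taken from the paper):

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def tof_distance(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Range from the phase shift of a CW-modulated beam.

    The beam travels to the object and back, so the round trip is
    c * delta_phi / (2 * pi * f); halving gives the sensor-to-object
    distance.
    """
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

# A pi/2 phase shift at an assumed 16 MHz modulation frequency
d = tof_distance(math.pi / 2, 16e6)  # ~2.34 m
```

Note the unambiguous range is c / (2 f): beyond it the phase wraps, which is why real sensors combine several modulation frequencies.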
“…structured light for Kinect v1 and TOF for Kinect v2, are based on IR signals. These algorithms may be affected by errors if the monitored area is characterized by a reflective surface [72], and this uncertainty in the evaluation of depth data can generate an error in the joint estimation process [73]. Even when some corrections may be required, depth data extracted from Kinect can be used to design algorithms with performance comparable to gold-standard systems, as discussed in [74].…”
Section: References (mentioning)
confidence: 99%
“…Also, as using IR light source, they can mitigate the effect of ambient light. Among the Microsoft Kinect v2, CubeEye and PMD CamCube of the main types of TOF cameras, the Kinect v2 is selected as sensor array components for its wider FOV, higher resolution and longer ranging (Corti et al., 2016). Compared to outdoor environment, the indoor data collection with depth camera is often impeded by more fend, shorter distance and smaller space, therefore, both horizontal movement and vertical pitch are required to make data complete, which lead to higher risk of tracking lost and more data processing workload. Besides, most RGB-D reconstruct method use the visual odometry to build relatively transformation between frames, which does not work properly in no texture region or regions with repetitive textures which are quite common for indoor images (Yousif et al., 2014).…”
Section: Hardware (mentioning)
confidence: 99%