Sensors, 2018
DOI: 10.3390/s18093122

A Versatile Method for Depth Data Error Estimation in RGB-D Sensors

Abstract: We propose a versatile method for estimating the RMS error of depth data provided by generic 3D sensors capable of generating RGB and depth (D) data of the scene, i.e., those based on techniques such as structured light, time of flight, and stereo. A common checkerboard is used: its corners are detected, and two point clouds are created, one with the real coordinates of the pattern corners and one with the corner coordinates given by the device. After registration of these two clouds, the RMS error…
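To make the pipeline described in the abstract concrete, here is a minimal sketch, assuming OpenCV for checkerboard corner detection, known pinhole intrinsics (fx, fy, cx, cy) for back-projecting corners through the sensor's depth map, and an SVD-based (Kabsch) rigid registration. All function names and the OpenCV/NumPy choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import cv2

def reference_corners(rows, cols, square_size):
    """Ideal checkerboard corner coordinates on the Z=0 plane (metres)."""
    grid = np.zeros((rows * cols, 3), dtype=np.float64)
    grid[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square_size
    return grid

def detect_corners(bgr, rows, cols):
    """Sub-pixel checkerboard corners in the RGB image (pixel coordinates)."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (cols, rows))
    if not found:
        raise RuntimeError("checkerboard not found")
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    return corners.reshape(-1, 2)

def backproject(corners_px, depth_m, fx, fy, cx, cy):
    """Lift 2D corners to 3D using the sensor's depth map and pinhole intrinsics."""
    pts = []
    for u, v in corners_px:
        z = float(depth_m[int(round(v)), int(round(u))])   # depth in metres
        pts.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return np.asarray(pts)

def rigid_register(src, dst):
    """Kabsch/SVD: rotation R and translation t best aligning src onto dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def depth_rms_error(reference, measured):
    """RMS point distance between the registered reference cloud and the measurement."""
    R, t = rigid_register(reference, measured)
    residual = (reference @ R.T + t) - measured
    return np.sqrt(np.mean(np.sum(residual ** 2, axis=1)))
```

The returned value corresponds to the residual point-to-point distance after the best rigid alignment of the reference corners to the measured ones, which is the RMS figure the abstract refers to.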

Cited by 23 publications (15 citation statements, all classified as "mentioning") · References 44 publications · Citing publications: 2019–2024
“…All experiments were run on an NVIDIA Jetson Xavier. The ArUco library [19] was employed for marker detection in the video sequences recorded with a calibrated Stereolabs ZED camera [70,71] mounted on top of a Pioneer 3AT robot, which is shown in Figure 13. All sensors and devices were calibrated and shared the same coordinate system.…”
Section: Methods
confidence: 99%
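The statement above cites the ArUco library for marker detection. As a point of reference, the sketch below shows equivalent detection with OpenCV's aruco module (OpenCV ≥ 4.7 API); the dictionary choice and input file name are assumptions, not details from the cited work.

```python
import cv2

# Illustrative ArUco detection; DICT_4X4_50 and the frame path are assumptions.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
parameters = cv2.aruco.DetectorParameters()
detector = cv2.aruco.ArucoDetector(dictionary, parameters)

frame = cv2.imread("zed_frame.png")        # hypothetical frame from a ZED sequence
corners, ids, _rejected = detector.detectMarkers(frame)
if ids is not None:
    print("detected marker ids:", ids.ravel().tolist())
```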
“…Our approach is devised for situations similar to the works of Stateczny et al. [35,36], where the key aspect of autonomous navigation is the need to avoid collisions with other objects, including shore structures. In this case, we are also developing a vision system based on the ZED camera, which works well for distances closer than 20 m [37,38], in addition to a LIDAR. The N-Boat is thus able to automatically detect obstacles and perform suitable maneuvers, as pointed out in the above references.…”
Section: Discussion
confidence: 99%
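The statement above describes depth-based obstacle detection with a ZED camera effective below 20 m. As an illustration only (the thresholds, region of interest, and function name are assumptions, not the N-Boat design), a depth-map obstacle check might look like this:

```python
import numpy as np

def obstacle_ahead(depth_m: np.ndarray, max_range: float = 20.0,
                   stop_dist: float = 5.0, min_pixels: int = 200) -> bool:
    """True if enough valid depth pixels fall closer than stop_dist.

    depth_m: HxW depth map in metres (NaN/0 where invalid, as a ZED may report).
    """
    h, w = depth_m.shape
    roi = depth_m[h // 3: 2 * h // 3, w // 3: 2 * w // 3]   # central corridor
    valid = np.isfinite(roi) & (roi > 0) & (roi < max_range)
    return int(np.count_nonzero(valid & (roi < stop_dist))) >= min_pixels
```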
“…The accuracy is thoroughly tested in [11,12]. Furthermore, in [10], the ZED camera was found to be more accurate beyond 3 meters than structured-light-based sensors such as the Kinect v1 and v2.…”
Section: Methods
confidence: 98%
“…Time-of-Flight (ToF) cameras, like the Kinect v2, were also discarded because of their poor performance in outdoor environments, as stated in [9]. Another key point for the camera choice was the work in [10], where a detailed survey of depth data error estimation was carried out on the three most commonly used RGB-D sensors: the ZED camera, Kinect v1, and Kinect v2. As that paper concludes (Section 5.3), for distances greater than 3.5 m the ZED camera obtains data with the lowest depth RMS error, which makes this depth sensor well suited to our dataset.…”
Section: Methods
confidence: 99%
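The comparison referenced here hinges on depth RMS error as a function of distance to the target. A toy helper for computing RMS error per distance bin from paired ground-truth and measured depths is sketched below; the bin edges and variable names are placeholders, not the evaluation protocol of the surveyed paper.

```python
import numpy as np

def rms_by_distance(d_true: np.ndarray, d_meas: np.ndarray,
                    edges: np.ndarray) -> np.ndarray:
    """RMS of (measured - true) depth within each [edges[i], edges[i+1]) bin."""
    rms = np.full(len(edges) - 1, np.nan)
    idx = np.digitize(d_true, edges) - 1   # bin index per sample
    for i in range(len(edges) - 1):
        err = d_meas[idx == i] - d_true[idx == i]
        if err.size:
            rms[i] = np.sqrt(np.mean(err ** 2))
    return rms

edges = np.arange(0.5, 6.0, 0.5)   # 0.5 m bins up to ~5.5 m (assumed range)
# rms_zed = rms_by_distance(gt_depths, zed_depths, edges)   # hypothetical data
```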