2014
DOI: 10.1016/j.robot.2014.02.004

Error propagation and uncertainty analysis between 3D laser scanner and camera

Cited by 11 publications (7 citation statements)
References 17 publications
“…Sensitivity tests for the calibration method presented in [18] were performed. It was concluded that the parameters involved in the distance between the camera and the calibration pattern (depth) are the most likely to introduce error into the final values of camera calibration, as also concluded in the uncertainty analysis in [6]. Now that the system behaviour is known, we can pay more attention to characterizing the uncertainty in these parameters. The LHS method tends to produce more stable results, i.e.…”
Section: Discussion
confidence: 95%
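The LHS referenced in this excerpt is Latin Hypercube Sampling, a stratified scheme that covers each parameter's range more evenly than plain random draws, which is why it tends to yield more stable sensitivity estimates. Below is a minimal Python sketch; the three calibration parameters, their nominal values, and their spreads are illustrative assumptions, not values from the cited papers.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """Latin Hypercube Sampling: split each dimension into n_samples
    equal strata, draw one uniform point per stratum, then shuffle
    the strata independently per dimension."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=(n_samples, n_dims))
    strata = (np.arange(n_samples)[:, None] + u) / n_samples
    for d in range(n_dims):
        strata[:, d] = rng.permutation(strata[:, d])
    return strata  # one sample per stratum per dimension, in [0, 1)

# Hypothetical parameters: focal length f [px], pattern depth z [m],
# tilt angle beta [rad]; nominal values and spreads are illustrative.
nominal = np.array([800.0, 1.5, 0.1])
spread = np.array([10.0, 0.2, 0.02])
samples = nominal + (2.0 * latin_hypercube(100, 3, seed=0) - 1.0) * spread
# Feed each row of `samples` through the calibration pipeline and
# compare output variance against plain Monte Carlo of the same size.
```

Comparing the output variance of the calibration under LHS draws against plain Monte Carlo draws of the same size makes the stability claim directly testable.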
“…Furthermore, the parameter with the greatest total sensitivity is also β. A similar behavior, analyzed from the error propagation perspective in LiDAR-camera calibration, is presented in [6]. We can state that the parameters directly involved with the distance of the calibration pattern and with image distortions tend to be the most relevant in the error propagation of our calibration system.…”
confidence: 91%
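The error-propagation perspective this excerpt refers to is usually formalized as first-order covariance propagation, Σ_y ≈ J Σ_x Jᵀ, where J is the Jacobian of the measurement model at the operating point. A minimal sketch, assuming a toy pinhole projection in which the pattern depth carries the largest input variance (the model and numbers are illustrative, not those of [6]):

```python
import numpy as np

def propagate_covariance(f, x0, cov_x, eps=1e-6):
    """First-order error propagation: cov_y ~= J @ cov_x @ J.T,
    with J the Jacobian of f at x0, estimated here by central
    finite differences."""
    x0 = np.asarray(x0, dtype=float)
    y0 = np.atleast_1d(f(x0))
    J = np.zeros((y0.size, x0.size))
    for i in range(x0.size):
        dx = np.zeros_like(x0)
        dx[i] = eps
        J[:, i] = (np.atleast_1d(f(x0 + dx))
                   - np.atleast_1d(f(x0 - dx))) / (2.0 * eps)
    return J @ cov_x @ J.T

# Toy pinhole model u = f_x * X / Z with p = (f_x, X, Z); the depth Z
# is given the largest input variance, mirroring the depth-dominance
# conclusion quoted above.
model = lambda p: np.array([p[0] * p[1] / p[2]])
cov_in = np.diag([1.0, 1e-4, 1e-2])
print(propagate_covariance(model, [800.0, 0.5, 2.0], cov_in))
```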
“…There are also some methods that automatically extract feature points, such as the vertices of a polygonal planar checkerboard, from the LiDAR data. Nevertheless, these approaches require either manual operation for feature-point selection from the image [10] or a customized checkerboard for feature-point generation in the point cloud [11]. Geiger et al. proposed an automatic method for extrinsic calibration with one shot of multiple checkerboards in [23].…”
Section: Multiple Geometry Elements
confidence: 99%
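Once board vertices are matched between the LiDAR point cloud and the camera image, the extrinsic pose is commonly recovered with a Perspective-n-Point solve. A minimal sketch using OpenCV's solvePnP follows; the vertex coordinates and intrinsic matrix K are hypothetical placeholders, and this is a generic PnP step, not the specific pipeline of [10], [11], or [23].

```python
import numpy as np
import cv2

# Hypothetical matched board vertices: 3D in the LiDAR frame (metres),
# coplanar on a slightly tilted board, and 2D in the image (pixels).
lidar_pts = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.0],
                      [0.5, 0.5, 2.1], [0.0, 0.5, 2.1]])
image_pts = np.array([[320.0, 240.0], [420.0, 238.0],
                      [418.0, 150.0], [322.0, 152.0]])
K = np.array([[800.0, 0.0, 320.0],   # assumed camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# solvePnP recovers the rigid transform taking LiDAR coordinates
# into the camera frame from the 3D-2D vertex correspondences.
ok, rvec, tvec = cv2.solvePnP(lidar_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)  # axis-angle to 3x3 rotation matrix
print("R =\n", R, "\nt =", tvec.ravel())
```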
“…For target-based calibration, the conventional method involves finding the vertices of a polygonal board, which can be a chessboard or a triangular board, both in the point cloud obtained by the LiDAR and in the image captured by the camera, either manually or automatically [9,10,11]. The vertices are estimated by constructing the convex hull of the extracted board's point cloud.…”
Section: Introduction
confidence: 99%
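A sketch of the convex-hull step described above, under the assumption that the board is planar: fit the plane by PCA, project the LiDAR points into 2D plane coordinates, and take the 2D hull there (a direct 3D hull is degenerate for coplanar points). The synthetic board and noise level are made up for demonstration.

```python
import numpy as np
from scipy.spatial import ConvexHull

def board_hull_vertices(points):
    """Outline a planar board from its LiDAR points: fit the plane
    by PCA, project into 2D plane coordinates, take the 2D hull."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c, full_matrices=False)
    uv = (points - c) @ vt[:2].T  # coordinates in the board plane
    hull = ConvexHull(uv)
    return points[hull.vertices]  # hull corners, back in 3D

# Synthetic 0.5 m square board, tilted in depth, with 1 mm noise.
rng = np.random.default_rng(0)
grid = rng.uniform(0.0, 0.5, size=(500, 2))
board = np.c_[grid, 2.0 + 0.2 * grid[:, 1]]
board += rng.normal(0.0, 1e-3, board.shape)
print(board_hull_vertices(board))
```

With real data the hull outline still contains many noise-induced vertices; practical pipelines typically fit lines to the hull edges and intersect them to recover the board corners.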
“…In some particular applications, the combination of multiple vision systems can synthesize the advantages of the member measurement devices and achieve comprehensive measurement results [8,12–18]. However, additional errors and measurement uncertainty are introduced into such combined systems, which affect the overall 3D reconstruction accuracy [19]. Hence, it is important to evaluate the measurement error and uncertainty of the combined visual system in order to perform error compensation and improve overall measurement accuracy.…”
Section: Introduction
confidence: 99%
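One common way to evaluate the uncertainty of such a combined system is Monte Carlo propagation: jointly perturb the member sensors' inputs and measure the spread of the fused output. A minimal sketch with a toy range-plus-bearing fusion (the fusion model and noise levels are assumptions for illustration):

```python
import numpy as np

def monte_carlo_uncertainty(pipeline, nominal, sigmas, n=5000, seed=0):
    """Jointly perturb all sensor inputs and measure the spread of
    the fused output (complements linearized propagation)."""
    rng = np.random.default_rng(seed)
    draws = nominal + rng.normal(0.0, sigmas, size=(n, nominal.size))
    out = np.array([pipeline(d) for d in draws])
    return out.mean(axis=0), out.std(axis=0)

# Toy fusion: LiDAR range r and camera bearing a combined into a 2D
# point; both sensors' noise shows up in the fused uncertainty.
fuse = lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])])
mean, std = monte_carlo_uncertainty(fuse, np.array([2.0, 0.3]),
                                    np.array([0.01, 0.002]))
print("fused point:", mean, "+/-", std)
```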