2019 IEEE International Conference on Real-Time Computing and Robotics (RCAR)
DOI: 10.1109/rcar47638.2019.9044146

Extrinsic Calibration of 3D Range Finder and Camera without Auxiliary Object or Human Intervention

Abstract: Fusion of heterogeneous exteroceptive sensors is the most efficient and effective way to represent the environment precisely, as it overcomes the various defects of each homogeneous sensor. The rigid transformation (a.k.a. extrinsic parameters) between the heterogeneous sensors must be available before the multisensor information can be fused precisely. Researchers have proposed several approaches to estimating the extrinsic parameters. These approaches require either auxiliary objects, like chessboards, or extra help fr…

Cited by 11 publications (8 citation statements). References: 24 publications.
“…The appearance information in the surroundings, such as geometric edge alignment, can be useful to reduce such errors. Liao and Liu (2019) utilize the line features in both the image and the point cloud to refine the calibration parameters by feature matching. A typical pipeline of a hand-eye based method is shown in Fig.…”
Section: Hand-eye Based Methods (mentioning, confidence: 99%)
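The edge-alignment idea in this excerpt can be made concrete with a small scoring function. The sketch below is illustrative only and is not the authors' implementation; the function name `edge_alignment_cost` and the use of OpenCV's Canny detector and distance transform are assumptions. It scores a candidate extrinsic (R, t) by how close projected lidar edge points land to image edges; a refinement step would minimize this cost over the extrinsic parameters.

```python
# Hypothetical sketch of edge-alignment scoring (not the paper's exact method):
# project lidar edge points into the image with candidate extrinsics and score
# them against a distance transform of the image's edge map.
import cv2
import numpy as np

def edge_alignment_cost(edge_points_lidar, image, K, R, t):
    """Lower cost = projected 3D edge points land closer to 2D image edges.

    edge_points_lidar: (N, 3) points on depth/intensity discontinuities
    image: grayscale uint8 image; K: 3x3 intrinsics; (R, t): candidate extrinsics
    """
    # Distance transform: each pixel stores the distance to the nearest edge.
    edges = cv2.Canny(image, 50, 150)
    dist = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)

    # Transform lidar edge points into the camera frame and project.
    pts_cam = (R @ edge_points_lidar.T + t.reshape(3, 1)).T
    pts_cam = pts_cam[pts_cam[:, 2] > 0]          # keep points in front of camera
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    h, w = dist.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    uv = uv[inside].astype(int)
    if len(uv) == 0:
        return np.inf
    return dist[uv[:, 1], uv[:, 0]].mean()
```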
“…The camera motion transformations can also be found using a standard visual odometry approach, which estimates the motion of a camera in real time using sequential images (i.e., ego-motion). As an example, ORB-SLAM (Mur-Artal et al., 2015) is a feature-based monocular simultaneous localization and mapping (SLAM) system that is frequently mentioned (Liao and Liu, 2019; Shi et al., 2019a). Note that motion estimation based purely on visual estimation faces the problem of scale ambiguity and requires additional methods to estimate the scale (Ishikawa et al., 2018; Taylor and Nieto, 2016).…”
Section: Hand-eye Based Methods (mentioning, confidence: 99%)
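Hand-eye based methods like those discussed here reduce calibration to the classic equation A X = X B, where A and B are per-interval ego-motions of the camera and the range finder and X is the unknown extrinsic transform. The sketch below illustrates one standard generic solution (rotation-axis alignment via Kabsch, then linear least squares for translation); it is not the method of the cited papers, and all names are assumptions.

```python
# Generic hand-eye sketch: solve A X = X B from paired relative motions.
# (R_a, t_a): camera ego-motion per interval; (R_b, t_b): lidar ego-motion.
# Rotation: R_a R_x = R_x R_b implies the rotation axes satisfy a_i = R_x b_i,
# so R_x follows from a Kabsch alignment of the log-map (rotation) vectors.
# Translation: (R_a - I) t_x = R_x t_b - t_a, stacked into a least-squares system.
import numpy as np
from scipy.spatial.transform import Rotation

def hand_eye(Ra_list, ta_list, Rb_list, tb_list):
    # --- rotation via Kabsch on rotation vectors ---
    A = np.array([Rotation.from_matrix(R).as_rotvec() for R in Ra_list])
    B = np.array([Rotation.from_matrix(R).as_rotvec() for R in Rb_list])
    H = B.T @ A                                   # sum of outer products b a^T
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    Rx = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # minimizes sum ||a - Rx b||^2
    # --- translation via stacked linear system ---
    M = np.vstack([Ra - np.eye(3) for Ra in Ra_list])
    v = np.concatenate([Rx @ tb - ta for tb, ta in zip(tb_list, ta_list)])
    tx, *_ = np.linalg.lstsq(M, v, rcond=None)
    return Rx, tx
```

Note that this linear two-stage solution needs at least two motion pairs with non-parallel rotation axes, which is why the excerpt's scale ambiguity in monocular visual odometry matters: an unknown scale on t_a corrupts the translation system.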
“…In [35], the authors use road structuring in urban areas, which would limit calibration to these kinds of environments. Instead, in [36], [37], the authors use the co-registration of modeled straight lines in both spaces. In these cases, the problem is that the environment must be highly structured to contain enough geometric primitives.…”
Section: Related Work (mentioning, confidence: 99%)
“…The difference in rotation is measured by the angle between the ground truth $R_{gt}$ and the resulting rotation $R_{res}$, calculated as $e_r = \lVert \log(R_{gt} R_{res}^{-1})^{\vee} \rVert$.⁴ The difference in translation is computed using vector subtraction as $e_t = \lVert t_{gt} - t_{res} \rVert_2$.…”
Section: B. Experiments in Synthetic Data (mentioning, confidence: 99%)
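As a concrete illustration, the two error metrics quoted above can be evaluated in a few lines. This is a minimal sketch assuming 3x3 rotation matrices (so $R^{-1} = R^T$) and SciPy's rotation-vector log map; it is not code from the paper.

```python
# Minimal sketch of the two error metrics from the excerpt above.
import numpy as np
from scipy.spatial.transform import Rotation

def rotation_error(R_gt, R_res):
    # e_r = ||log(R_gt R_res^-1)^v||: angle of the residual rotation, in radians.
    return np.linalg.norm(Rotation.from_matrix(R_gt @ R_res.T).as_rotvec())

def translation_error(t_gt, t_res):
    # e_t = ||t_gt - t_res||_2
    return np.linalg.norm(t_gt - t_res)
```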
“…1. http://pointclouds.org  2. http://eigen.tuxfamily.org  3. http://ceres-solver.org  4. The operator $\varphi = \log(R)^{\vee}$ is defined to associate $R \in SO(3)$ with its rotation angle vector $\varphi \in \mathbb{R}^3$ along the rotation axis.…”
(mentioning, confidence: 99%)