2019
DOI: 10.1002/cav.1879
Scale‐aware camera localization in 3D LiDAR maps with a monocular visual odometry

Abstract: Localization information is essential for mobile robot systems in navigation tasks. Many visual-based approaches focus on localizing a robot within prior maps acquired with cameras. This is critical where the Global Positioning System (GPS) signal is unreliable. In contrast to conventional methods that localize a camera in an image-based map, we propose a novel approach that localizes a monocular camera within a given three-dimensional (3D) light detection and ranging (LiDAR) map. We employ visual odometry to reconstr…

Cited by 2 publications (1 citation statement)
References 31 publications
“…For precise tracking, it compares RGB images with a synthesized depth image projected from a LiDAR map. Our previous work 25 tracked the 6-DoF pose of a monocular camera within a LiDAR map by matching sparse camera point clouds, acquired from direct sparse odometry (DSO-SLAM), with an a priori LiDAR map to find corresponding points. This approach also requires a coarse initial position estimate as prior information.…”
Section: Related Work
confidence: 99%
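The cited approach aligns sparse point clouds from monocular visual odometry against a metric LiDAR map. Because monocular odometry is scale-ambiguous, the alignment must recover not just rotation and translation but also a metric scale factor. A minimal sketch of one standard way to do this over known correspondences is Umeyama's closed-form similarity alignment; the function name and the use of NumPy here are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Estimate a similarity transform (scale s, rotation R, translation t)
    minimizing ||dst - (s * R @ src + t)||^2 over corresponding 3D points.
    src, dst: (N, 3) arrays of matched points (e.g. VO cloud vs. LiDAR map)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)           # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                         # keep R a proper rotation
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src     # recovers the metric scale
    t = mu_d - s * R @ mu_s
    return s, R, t
```

In a full pipeline, the correspondences themselves would come from nearest-neighbor search (e.g. a KD-tree over the LiDAR map) inside an iterative ICP-style loop, with this closed-form step re-estimating the similarity transform each iteration.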