2019
DOI: 10.1109/lra.2019.2927123
Visual-Inertial Localization With Prior LiDAR Map Constraints

Cited by 50 publications
(29 citation statements)
References 36 publications
“…The evidence is more obvious in composite positioning technologies, for instance WiFi and vision [1], inertia and vision [2], LiDAR and vision [3], and hybrid localization technology [4]. Visual localization plays a key role in each of these system architectures.…”
Section: A. Background and Significance
confidence: 99%
“…In this work, we propose an approach for real-time lightweight monocular camera localization in prior 3D LiDAR maps using direct 2D-3D geometric line correspondences. We assume a coarse pose initialization is given and focus on pose tracking in maps, following the related works [4], [5]. For geometric concurrent feature extraction, 3D line segments are detected offline from LiDAR maps while robust 2D line segments are extracted online from video sequences.…”
Section: VINS-Mono
confidence: 99%
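The 2D-3D line correspondence idea summarized in the excerpt above boils down to a point-to-line reprojection residual: project the 3D endpoints of a map line into the image and measure their distance to the detected 2D line. The sketch below is a generic illustration under a standard pinhole model; the function name and parameterization are assumptions, not the cited paper's actual implementation.

```python
import numpy as np

def line_residual(K, R, t, P0, P1, line2d):
    """Point-to-line reprojection residual for one 2D-3D line match.

    K       : 3x3 camera intrinsics
    R, t    : camera pose (world -> camera rotation and translation)
    P0, P1  : 3D endpoints of the map line segment (world frame)
    line2d  : (a, b, c) of the detected image line a*u + b*v + c = 0,
              normalized so that a**2 + b**2 == 1
    Returns the signed distances of both projected endpoints to the line.
    """
    a, b, c = line2d
    res = []
    for P in (P0, P1):
        p = K @ (R @ np.asarray(P) + t)   # project endpoint into the image
        u, v = p[0] / p[2], p[1] / p[2]
        res.append(a * u + b * v + c)     # signed point-to-line distance
    return np.array(res)
```

A pose tracker would stack these residuals over all matched lines and minimize them over (R, t), starting from the coarse initialization the excerpt mentions.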
“…Similarly, in [14], 3D structural descriptors are used for matching LiDAR maps with sparse visually reconstructed point clouds. In [8], [5], dense local point clouds are reconstructed from a stereo camera and matched with the 3D LiDAR maps, and the matching results are then loosely or tightly coupled into the VO and VIO systems to optimize camera poses. These 3D-registration-based localization methods obtain feasible results compared with vision-only methods.…”
Section: Related Work
confidence: 99%
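The 3D registration described above, matching a stereo-reconstructed local cloud against a LiDAR map, rests on estimating a rigid transform between corresponding points. The sketch below shows the closed-form Kabsch/Umeyama step that ICP-style pipelines iterate with re-matching; the function name and interface are illustrative assumptions, not any cited system's API.

```python
import numpy as np

def align_clouds(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst : (N, 3) arrays of corresponding points, e.g. a locally
               reconstructed stereo cloud and its matched LiDAR map points.
    Returns R (3x3 proper rotation) and t (3,) minimizing ||R*src + t - dst||.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)         # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    # Fix the sign so the result is a rotation (det = +1), not a reflection
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Real registration pipelines alternate this step with nearest-neighbor re-matching against the map until the pose converges.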
“…The point cloud map [24], [25], [26], [27], [28], [29], [30], [31], [32] is a novel map source that provides dense and accurate 3D reference points. Compared with OSM and satellite images, a point cloud map supports 3D localization well, which makes it popular in modern vehicle localization algorithms.…”
Section: Introduction
confidence: 99%