2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/IROS47612.2022.9982195
Continuous Self-Localization on Aerial Images Using Visual and Lidar Sensors

Cited by 11 publications (17 citation statements: 0 supporting, 17 mentioning, 0 contrasting)
References 43 publications
“…Chiu et al (2018) proposed a new approach that uses semantic information to register 2D monocular video frames to LiDAR data for augmented reality driving applications. Fervers et al (2022) combined features extracted from the ground image and the LiDAR point cloud to find the registration relationship. Some researchers believe that, compared with 3D point clouds, a 2D orthophoto or other map is better suited as a geographical reference because it is easier to obtain.…”
Section: Image Geo-registration (mentioning, confidence: 99%)
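The registration idea quoted above pairs camera features with LiDAR geometry. A minimal sketch of the common first step, projecting LiDAR points into the camera image through a pinhole model and sampling per-pixel features at the projections (function names, shapes, and the nearest-neighbour sampling are illustrative assumptions, not the cited papers' actual pipelines):

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project (N, 3) LiDAR points into a camera image.

    T_cam_lidar: (4, 4) extrinsic transform LiDAR -> camera.
    K:           (3, 3) camera intrinsic matrix.
    Returns (N, 2) pixel coordinates and a mask of points
    that lie in front of the camera.
    """
    # Homogeneous transform into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    in_front = pts_cam[:, 2] > 1e-6
    uvw = (K @ pts_cam.T).T
    # Safe perspective division; points behind the camera are
    # garbage here but are filtered out by the mask downstream.
    uv = uvw[:, :2] / np.maximum(uvw[:, 2:3], 1e-6)
    return uv, in_front

def sample_features(feature_map, uv, mask):
    """Sample an (H, W, C) feature map at projected pixel locations
    (nearest neighbour for brevity; bilinear in practice)."""
    h, w = feature_map.shape[:2]
    uv = uv[mask]                            # drop points behind the camera
    ij = np.round(uv[:, ::-1]).astype(int)   # (u, v) -> (row, col)
    inside = (ij[:, 0] >= 0) & (ij[:, 0] < h) \
           & (ij[:, 1] >= 0) & (ij[:, 1] < w)
    return feature_map[ij[inside, 0], ij[inside, 1]]
```

The result is a set of image features attached to geo-referenced 3D points, which is what makes the subsequent cross-modal registration possible.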
“…Hybrid sensor solutions have also been explored, such as in [16], where an aerial robot achieves global localization through the use of egocentric 3D semantically labelled LiDAR, IMU, and visual information. CSLA [6] and SIBCL [33] extract visual features from ground and satellite images and use LiDAR points to establish correspondence between the two views. CSLA [6] aims to estimate a 2-DoF translation, while SIBCL [33] aims to estimate a 3-DoF pose, including an additional orientation. All these methods critically rely on depth information to build the correspondence across the two views.…”
Section: Related Work (mentioning, confidence: 99%)
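The 2-DoF variant quoted above reduces to finding the planar offset that best aligns ground-derived features with the aerial feature map. A sketch of a dense translation search via cross-correlation (a generic formulation under assumed single-channel feature maps, not the specific CSLA or SIBCL implementations):

```python
import numpy as np
from scipy.signal import correlate2d

def estimate_translation(ground_feat, aerial_feat):
    """Estimate the 2-DoF (dx, dy) offset of a small ground-derived
    feature patch within a larger aerial feature map by dense
    cross-correlation."""
    # Zero-mean both maps so the correlation peak is not biased
    # toward uniformly high-valued regions.
    g = ground_feat - ground_feat.mean()
    a = aerial_feat - aerial_feat.mean()

    score = correlate2d(a, g, mode='valid')  # (H-h+1, W-w+1) score map
    dy, dx = np.unravel_index(score.argmax(), score.shape)
    return dx, dy  # top-left offset of the best-matching placement
```

Extending this search over a discretized heading angle is one way to obtain the additional orientation of a 3-DoF pose, at the cost of correlating once per candidate rotation.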
“…HD maps may thus be provided by third-party mapping companies [26] as well as derived from publicly available data, e.g. aerial imagery [27].…”
Section: Introduction (mentioning, confidence: 99%)
“…Especially for cross-modality localization, the identification of reliable landmarks for various sensor and map modalities is non-trivial. Here, learning-based approaches have become the state of the art for both cross-modal PR [1], [11], [34], [35], [36] and local pose tracking, achieving localization accuracies below 1 m for various sensor modalities, including radar-to-lidar [2], [37], [38], range-to-aerial-imagery [35], [39], [40], [41], and camera-to-aerial-imagery, also called cross-view geo-localization (CVGL) [27], [41], [42].…”
Section: Introduction (mentioning, confidence: 99%)