2014 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2014.6906964

Micro air vehicle localization and position tracking from textured 3D cadastral models

Abstract: In this paper, we address the problem of localizing a camera-equipped Micro Aerial Vehicle (MAV) flying in urban streets at low altitudes. An appearance-based global positioning system to localize MAVs with respect to the surrounding buildings is introduced. We rely on an air-ground image matching algorithm to search the airborne image of the MAV within a ground-level Street View image database and to detect image matching points. Based on the image matching points, we infer the global position of the MAV by b…
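As a rough illustration of the pipeline the abstract describes, the sketch below matches an airborne image against a geo-registered Street View image and infers a global camera pose from matched points whose 3D positions are known from the model. The feature type, matcher, ratio-test threshold, and the `localize_mav` / `streetview_points_3d` names are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch (assumptions throughout, not the paper's implementation)
# of air-ground matching followed by global pose inference: match SIFT
# features between the aerial and Street View images, keep matches whose
# Street View keypoint has a known 3D position, then run PnP with RANSAC.
import cv2
import numpy as np

def localize_mav(aerial_img, streetview_img, streetview_points_3d, K):
    """streetview_points_3d: dict mapping a Street View keypoint index to its
    3D world position (obtained off-line from the cadastral 3D model)."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(aerial_img, None)
    kp_s, des_s = sift.detectAndCompute(streetview_img, None)

    # Nearest-neighbour matching with Lowe's ratio test.
    matcher = cv2.BFMatcher()
    good = []
    for pair in matcher.knnMatch(des_a, des_s, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    # Keep matches whose Street View keypoint carries a 3D position.
    obj, img = [], []
    for m in good:
        if m.trainIdx in streetview_points_3d:
            obj.append(streetview_points_3d[m.trainIdx])
            img.append(kp_a[m.queryIdx].pt)
    if len(obj) < 6:
        return None

    # PnP with RANSAC gives the camera pose in the global model frame.
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        np.array(obj, np.float32), np.array(img, np.float32), K, None)
    return (rvec, tvec) if ok else None
```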

Cited by 8 publications (7 citation statements) | References 24 publications
“…In fact, data from different algorithms can be combined to derive a more accurate estimate of the vehicle’s pose, since their uncertainties may be complementary. The work in [27] fuses visual odometry, used to track the position of a camera-equipped Micro Aerial Vehicle (MAV) flying in urban streets, with an air-ground image matching algorithm based on a cadastral 3D city model, by means of a Kalman filter. Additionally, our approach is based on the fusion of VO and the appearance-based place recognition algorithm SeqSLAM.…”
Section: Related Work
confidence: 99%
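The Kalman-filter fusion mentioned in the statement above can be illustrated with a minimal position-only sketch: visual odometry propagates the estimate and an occasional global fix from air-ground matching corrects it. The class name, noise magnitudes, and the purely translational state are assumptions for illustration, not the filter used in [27].

```python
# A minimal sketch of fusing relative visual-odometry motion with sparse
# global position fixes from air-ground image matching, using a linear
# Kalman filter on the MAV's 3D position. Noise parameters are illustrative.
import numpy as np

class PositionKalmanFilter:
    def __init__(self, p0, sigma0=1.0):
        self.x = np.asarray(p0, dtype=float)       # 3D position estimate
        self.P = np.eye(3) * sigma0**2             # estimate covariance

    def predict(self, vo_delta, vo_sigma=0.1):
        # Propagate with the relative translation reported by visual odometry.
        self.x = self.x + np.asarray(vo_delta, dtype=float)
        self.P = self.P + np.eye(3) * vo_sigma**2  # VO drift accumulates

    def update(self, global_fix, fix_sigma=0.5):
        # Correct with an absolute position inferred from matching the
        # airborne image against geo-registered Street View imagery.
        R = np.eye(3) * fix_sigma**2
        K = self.P @ np.linalg.inv(self.P + R)     # Kalman gain
        self.x = self.x + K @ (np.asarray(global_fix, dtype=float) - self.x)
        self.P = (np.eye(3) - K) @ self.P

kf = PositionKalmanFilter(p0=[0.0, 0.0, 10.0])
kf.predict(vo_delta=[1.0, 0.0, 0.0])               # VO step (drifts over time)
kf.update(global_fix=[1.2, -0.1, 10.3])            # occasional global correction
print(kf.x)
```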
“…In previous works, data association for city-scale SLAM has, as far as we know, either been carried out via simple geometric criteria [6][7][8] or left to outlier-elimination processes such as RANSAC (e.g. in [14]). In [6][7][8], a point p is associated with a building if the ray cast from the camera center through p intersects a building plane.…”
Section: Related Work
confidence: 99%
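The geometric association criterion quoted above amounts to a ray-plane intersection test. Below is a minimal sketch, assuming a single facade plane given by a point and a normal; the function name and tolerance are illustrative.

```python
# A minimal sketch of the data-association criterion described above: a 3D
# point is associated to a building if the ray from the camera center through
# that point intersects the building's facade plane.
import numpy as np

def ray_plane_intersection(cam_center, point, plane_point, plane_normal):
    """Return the intersection of the ray (cam_center -> point) with the
    plane, or None if the ray is parallel to or points away from it."""
    d = np.asarray(point, float) - np.asarray(cam_center, float)
    n = np.asarray(plane_normal, float)
    denom = n @ d
    if abs(denom) < 1e-9:
        return None                                   # ray parallel to plane
    t = n @ (np.asarray(plane_point, float) - cam_center) / denom
    if t < 0:
        return None                                   # plane behind the camera
    return cam_center + t * d

hit = ray_plane_intersection(
    cam_center=np.zeros(3),
    point=np.array([2.0, 0.0, 5.0]),
    plane_point=np.array([0.0, 0.0, 10.0]),           # a point on the facade
    plane_normal=np.array([0.0, 0.0, -1.0]))          # facade normal
print(hit)
```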
“…Although the time complexity of such an evaluation remains negligible, its results naturally tend to contain a considerable amount of noise. In [14], Google Street View images with known poses are first back-projected onto the 3D building models in an off-line pre-processing step. In other words, each pixel in a Street View image is associated with a 3D position.…”
Section: Related Work
confidence: 99%
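The off-line back-projection described above can be sketched as follows: assuming a depth map rendered from the building model and a known camera pose, every Street View pixel is mapped to a 3D world position. The depth map, intrinsics, and function name here are placeholder assumptions, not data or code from [14].

```python
# A minimal sketch of the off-line association: with a known camera pose and
# a depth rendering of the 3D building model, every pixel of a Street View
# image inherits a 3D world position.
import numpy as np

def pixel_to_world_map(depth, K, R, t):
    """depth: HxW depth map rendered from the building model (meters).
    R, t: camera-to-world rotation and camera center.
    Returns an HxWx3 array with the 3D world position of every pixel."""
    H, W = depth.shape
    us, vs = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([us, vs, np.ones_like(us)], axis=-1).astype(float)
    rays_cam = pix @ np.linalg.inv(K).T          # normalized camera rays
    pts_cam = rays_cam * depth[..., None]        # scale rays by rendered depth
    return pts_cam @ R.T + t                     # camera-to-world transform

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
depth = np.full((480, 640), 15.0)                # placeholder rendered depth
world = pixel_to_world_map(depth, K, np.eye(3), np.zeros(3))
print(world.shape)                               # (480, 640, 3)
```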
“…We, on the other hand, compute a full 6-DoF metric localization on the panoramic images. The work in [17] extended that approach by adding 3D building models as input to improve localization. Other researchers have matched Street View panoramas by matching descriptors computed directly on the panoramas [25].…”
Section: Related Work
confidence: 99%