2011
DOI: 10.1109/taes.2011.5751236

Comparison of Two Image and Inertial Sensor Fusion Techniques for Navigation in Unmapped Environments

Cited by 26 publications (15 citation statements)
References 15 publications
“…This approach requires a number of reference nodes (anchor nodes or landmarks) deployed at fixed locations, as well as one or more mobile nodes (receivers). Another approach is to combine the information provided by an Inertial Measurement Unit (IMU) with aiding sensors such as cameras (monocular, stereo, and RGB-D) and LiDAR sensors (Leutenegger, 2013; Veth, 2011).…”
Section: Indoor Positioning and Mapping (mentioning)
confidence: 99%

“…Feature extraction and matching algorithms are commonly combined with RANSAC procedures that aim to perform outlier detection and removal using solely image observations, by means of position and attitude estimation (Hartley and Schaffalitzky, 2004; Nistér, 2003). Other approaches combine derived trajectories or inertial data to predict where a point feature should appear in the second image (Veth, 2011; Taylor, 2011).…”
Section: Indoor Positioning and Mapping (mentioning)
confidence: 99%

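The image-only outlier rejection described in this excerpt can be sketched with OpenCV: features are matched between two frames, and an essential-matrix RANSAC keeps only the correspondences consistent with a single relative position and attitude. This is a minimal illustrative sketch, not the cited papers' implementations; the helper name `match_and_reject_outliers`, the choice of ORB features, and the RANSAC parameters are assumptions.

```python
import cv2
import numpy as np

def match_and_reject_outliers(img1, img2, K):
    """Match ORB features across two frames and reject outliers with an
    essential-matrix RANSAC (hypothetical helper; parameters assumed)."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching suits ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC over the essential matrix: inliers must agree with a single
    # relative pose (position and attitude) between the two views.
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    inliers = mask.ravel().astype(bool)

    # Recover the relative rotation and unit-scale translation from E.
    _, R, t, _ = cv2.recoverPose(E, pts1[inliers], pts2[inliers], K)
    return pts1[inliers], pts2[inliers], R, t
```

The inertial alternative mentioned in the excerpt would replace the blind brute-force match with a search window centered on the feature location predicted from the IMU-propagated pose.
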
“…On the one hand, imaging sensor performance has been a key enabling technology, allowing for information-rich data acquisition. On the other hand, algorithmic developments, in particular advances in computer vision, have highly automated the extraction of geometric information, making it feasible to integrate it efficiently into navigation filters [19,20,21,22,23,24]. Broadly speaking, imaging sensors work in active or passive modes, based on whether they provide a signal to observe the object space or just sense some part of the spectrum.…”
Section: Image-Based PNT (mentioning)
confidence: 99%

“…These approaches are usually referred to as visual odometry (VO) or Structure from Motion (SfM) in the robotics and computer vision communities. Alternatively, Taylor et al. (2011) present two strategies that use an IMU as the primary positioning sensor and control inertial drift with visual information during the filtering step. Both approaches are implemented using an Unscented Kalman Filter estimation method.…”
Section: Introduction (mentioning)
confidence: 99%
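
A minimal sketch of that architecture, using FilterPy's UnscentedKalmanFilter: high-rate IMU accelerations drive the prediction step, while lower-rate visual position fixes bound the accumulated drift. The six-state layout, noise levels, sensor rates, and synthetic data below are assumptions for illustration, not the filter from the paper.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

DT = 0.01  # IMU sample period (assumed 100 Hz)

def fx(x, dt, accel=np.zeros(3)):
    """Propagate [position, velocity] with the IMU acceleration.
    Attitude and bias states are omitted to keep the sketch short."""
    p, v = x[:3], x[3:]
    return np.hstack([p + v * dt + 0.5 * accel * dt ** 2, v + accel * dt])

def hx(x):
    """Visual update modeled as a direct, drift-correcting position fix,
    standing in for the paper's image-feature measurements."""
    return x[:3]

points = MerweScaledSigmaPoints(n=6, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=6, dim_z=3, dt=DT,
                            hx=hx, fx=fx, points=points)
ukf.x = np.zeros(6)
ukf.P *= 0.1
ukf.Q = np.eye(6) * 1e-4  # process noise from IMU errors (assumed)
ukf.R = np.eye(3) * 0.05  # visual fix noise (assumed)

rng = np.random.default_rng(0)
accels = rng.normal(0.0, 0.02, size=(200, 3))  # synthetic IMU stream
fixes = rng.normal(0.0, 0.05, size=(20, 3))    # synthetic visual fixes

for k, a in enumerate(accels):
    ukf.predict(accel=a)            # IMU as the primary positioning sensor
    if (k + 1) % 10 == 0:           # e.g. a 10 Hz camera
        ukf.update(fixes[k // 10])  # visual information bounds the drift
```

The UKF's sigma-point propagation avoids hand-linearizing the motion and measurement models, which is a common reason to prefer it over an EKF when camera measurement models are strongly nonlinear.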