2007 IEEE/RSJ International Conference on Intelligent Robots and Systems
DOI: 10.1109/iros.2007.4399563

Improving MAV pose estimation using visual information

Cited by 25 publications (10 citation statements)
References 15 publications
“…Several innovative algorithms fusing GPS and MEMS observations have been shown to improve the modelling of the large stochastic drifts within MEMS IMUs and, as a consequence, the accuracy of orientation estimates [23][24][25]. Furthermore, the augmentation of techniques developed within the fields of photogrammetry and computer vision has contributed to improving the accuracy of MEMS-based navigation systems when used for direct georeferencing [26][27][28]. These developments suggest that a UAV system consisting of a lightweight MEMS-based IMU along with GPS and visual observations can provide estimates of position and orientation with the accuracy required for mapping forest metrics using UAV-borne LiDAR.…”
Section: Introduction
confidence: 99%
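The GPS/MEMS fusion idea in the excerpt above can be illustrated with a deliberately minimal sketch: a 1-D Kalman filter that integrates a drifting gyro rate and corrects the heading whenever an absolute observation (e.g. a GPS-derived track heading) is available. This is not the algorithm of the cited works, which estimate full 3-D attitude with explicit bias states; all names and noise values here are illustrative assumptions.

```python
def fuse_heading(gyro_rates, gps_headings, dt=0.1, q=1e-3, r=0.05):
    """Minimal 1-D Kalman filter: integrate a (possibly biased) gyro rate
    and correct with intermittent absolute heading observations.
    Illustrative sketch only -- real MEMS/GPS fusion estimates full 3-D
    attitude and models the gyro bias as an explicit state."""
    x = 0.0          # heading estimate (rad)
    p = 1.0          # estimate variance
    estimates = []
    for rate, z in zip(gyro_rates, gps_headings):
        # predict: integrate the gyro rate and inflate the uncertainty
        x += rate * dt
        p += q
        if z is not None:            # update when a GPS heading arrives
            k = p / (p + r)          # Kalman gain
            x += k * (z - x)
            p *= (1.0 - k)
        estimates.append(x)
    return estimates
```

With a constant 0.05 rad/s gyro bias, dead reckoning alone drifts linearly, while the same data corrected by a 0-rad GPS heading stays bounded near the truth, which is the qualitative effect the excerpt attributes to GPS/MEMS fusion.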
“…Another strategy is registration of live video to previously acquired satellite imagery [3], which provides a direct measurement of a viewable object's world location. Either of these may be combined with simultaneous visual UAV pose refinement techniques such as Extended Kalman Filter (EKF) visual landmark tracking [4], [5], structure from motion [6], and homography-based pose refinement [7] for additional observation accuracy. However, terrain matching techniques assume the availability of recent prior imagery and are vulnerable to changes since this imagery was acquired.…”
Section: Related Work
confidence: 99%
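The homography-based pose refinement mentioned in the excerpt above rests on a standard geometric fact: a plane n·X = d induces a homography H = K(R + t nᵀ/d)K⁻¹ between two views related by X₂ = R X₁ + t. The sketch below only demonstrates that geometry with synthetic values (intrinsics, pose, and plane are all made-up assumptions); a real refinement system would estimate H from feature matches and decompose it to recover R and t.

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    """Homography induced by the plane n.X = d (camera-1 frame) between
    two views related by X2 = R @ X1 + t.  Geometry sketch only: a real
    system estimates H from matches and decomposes it to refine the pose."""
    return K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

def project(K, X):
    """Pinhole projection of a 3-D point to pixel coordinates."""
    x = K @ X
    return x[:2] / x[2]

# Synthetic two-view setup (all values illustrative).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
th = 0.1  # small yaw between the views (rad)
R = np.array([[ np.cos(th), 0.0, np.sin(th)],
              [ 0.0,        1.0, 0.0       ],
              [-np.sin(th), 0.0, np.cos(th)]])
t = np.array([0.2, 0.0, 0.0])
n, d = np.array([0.0, 0.0, 1.0]), 4.0   # plane z = 4 in camera-1 frame

X1 = np.array([0.5, -0.3, 4.0])          # a point on that plane
X2 = R @ X1 + t
H = plane_homography(K, R, t, n, d)
p1 = project(K, X1)
q = H @ np.array([p1[0], p1[1], 1.0])    # transfer pixel via homography
```

Dividing `q[:2]` by `q[2]` reproduces the direct projection of the point into the second view, confirming that pixel transfer through H is consistent with the relative pose, which is what makes H usable for pose refinement.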
“…Rather than run full visual and wheel odometry systems in parallel, we have focused on how best to exploit wheel odometry to lighten the computational burden of VO. A number of authors have used visual matching to correct IMU or GPS data on aerial platforms (Brown and Sullivan, 2002; Andersen and Taylor, 2007). Our system is implemented on a ground rover, where many of the assumptions afforded in the air, such as nearly coplanar features and slow visual flow, do not apply.…”
Section: Related Work
confidence: 99%
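The wheel-odometry prior the excerpt above relies on can be sketched as plain differential-drive dead reckoning: integrating left/right wheel displacements into a planar pose that a VO pipeline could use as a motion prediction. The function and parameter names are illustrative assumptions, not the cited system's interface.

```python
import math

def integrate_odometry(pose, d_left, d_right, wheel_base):
    """Differential-drive dead reckoning: advance a planar pose
    (x, y, theta) from left/right wheel displacements.  Sketch of the
    wheel-odometry prior such a system could feed to visual odometry."""
    x, y, th = pose
    d = 0.5 * (d_left + d_right)            # arc length of chassis centre
    dth = (d_right - d_left) / wheel_base   # heading change
    # evaluating at the midpoint heading gives a second-order-accurate step
    x += d * math.cos(th + 0.5 * dth)
    y += d * math.sin(th + 0.5 * dth)
    return (x, y, th + dth)
```

Equal wheel displacements give pure translation and opposite displacements give rotation in place, the two degenerate motions a ground rover's prior must handle.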