2010
DOI: 10.2514/1.48134
Navigation Aiding Based on Coupled Online Mosaicking and Camera Scanning

Abstract: This paper presents a new method for vision-aided navigation of airborne platforms. The method is based on online mosaicking using images acquired by an on-board gimballed camera, which scans ground regions in the vicinity of the flight trajectory. The coupling of the scanning and mosaicking processes improves image-based motion estimation when operating in challenging scenarios such as narrow field-of-view cameras observing low-texture scenes. These improved motion estimations are fused with an inertial navig…

Cited by 10 publications (5 citation statements)
References 40 publications
“…In Soatto et al. [9], epipolar geometry is employed in conjunction with a statistical motion model, while in Prazenica et al. [10], epipolar constraints are fused with a dynamical model of an airplane. In Indelman et al. [11,12], the constraints between the current and previous image are defined using epipolar geometry and combined with IMU measurements in an extended Kalman filter (EKF). With two-view based methods for aiding navigation, however, it is only possible to determine camera rotations and up-to-scale translations [13] (translations during the intervals are associated with an unknown scale).…”
Section: Related Work
confidence: 99%
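The up-to-scale ambiguity noted in this citation statement can be demonstrated numerically: the epipolar constraint x₂ᵀEx₁ = 0 with E = [t]ₓR is invariant to scaling the translation t, provided the scene scales with it. The following sketch uses a hypothetical rotation, translation, and 3-D point (all values illustrative, not from the paper):

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical relative motion between two views: rotation R, translation t.
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.2, 0.05])

# Essential matrix E = [t]_x R encodes the epipolar constraint x2' E x1 = 0.
E = skew(t) @ R

# A 3-D point seen in both views, in normalized camera coordinates.
X = np.array([0.3, -0.4, 5.0])
x1 = X / X[2]                       # view 1 (reference pose)
Xc2 = R @ X + t                     # point expressed in view-2 frame
x2 = Xc2 / Xc2[2]
print(abs(x2 @ E @ x1))             # ~0: epipolar constraint holds

# Scale t (and the scene) by any k > 0: the image measurements are identical,
# so two views alone cannot recover the magnitude of t.
k = 7.0
E_scaled = skew(k * t) @ R
Xk = k * X
x1k = Xk / Xk[2]                    # same pixel as x1
Xc2k = R @ Xk + k * t
x2k = Xc2k / Xc2k[2]                # same pixel as x2
print(abs(x2k @ E_scaled @ x1k))    # ~0: indistinguishable from the original
```

Since the scaled configuration reproduces the same image measurements and satisfies the same constraint, only the translation direction is observable from two views, which is why the paper's mosaic-based formulation seeks additional constraints.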
“…Assuming the camera calibration matrix is known, the camera projection matrices can be defined as follows [11], where $A = [a_1, a_2, a_3] = C_{c_1}^{c_2}$ is the rotation matrix from camera $c_1$ to camera $c_2$, $B = [b_1, b_2, b_3] = C_{c_1}^{c_3}$ is the rotation matrix from camera $c_1$ to camera $c_3$, $a_4 = T_{12}^{c_2}$ is the translation from camera $c_1$ to camera $c_2$ expressed in camera $c_2$, and $b_4 = T_{13}^{c_3}$ is the translation from camera $c_1$ to camera $c_3$ expressed in camera $c_3$.…”
Section: Estimator Description
confidence: 99%
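With the rotations and translations defined as in the quoted passage, the three projection matrices take the standard form P = K[R | t], with camera c₁ as the reference. A minimal sketch, assuming illustrative values for the intrinsics K and the relative poses (none of these numbers come from the paper):

```python
import numpy as np

# Assumed (illustrative) calibration matrix K; the paper only assumes K is known.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def rot_z(a):
    """Rotation about the camera z-axis by angle a (radians)."""
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

A = rot_z(0.05)                    # C_{c1}^{c2}: rotation from camera c1 to c2
B = rot_z(0.10)                    # C_{c1}^{c3}: rotation from camera c1 to c3
a4 = np.array([0.5, 0.0, 0.1])     # translation c1 -> c2, expressed in c2
b4 = np.array([1.0, 0.1, 0.2])     # translation c1 -> c3, expressed in c3

# Projection matrices P = K [R | t], with c1 as the reference frame.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([A, a4.reshape(3, 1)])
P3 = K @ np.hstack([B, b4.reshape(3, 1)])

# Project a homogeneous 3-D point into each view.
X = np.array([0.2, -0.3, 4.0, 1.0])
for P in (P1, P2, P3):
    x = P @ X
    print(x[:2] / x[2])            # pixel coordinates in that view
```

The three matrices share the translation scale ambiguity discussed earlier; the three-view formulation constrains the ratio between the two baselines.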
“…The first 9 components of X are given in the LLLN coordinates, while the last 6 are represented in the body-fixed reference frame. Then the continuous-time system process model is given by [2], [3]…”
Section: A Process Model
confidence: 99%
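The cited process model itself is elided in the quote (it is "given by [2], [3]"). As a rough, heavily simplified sketch of a generic 15-state INS process model with the partitioning the quote describes (9 components in local-level coordinates, 6 bias components in the body frame), ignoring Earth rate, transport rate, and Coriolis terms, and not the paper's actual equations:

```python
import numpy as np

# Gravity in a local-level NED frame (down positive); illustrative constant.
G = np.array([0.0, 0.0, 9.81])

def process_model(x, f_b, omega_b, C_b_n):
    """Simplified continuous-time INS process model (flat-earth sketch).

    x       : 15-state vector [pos(3), vel(3), att(3), accel bias(3), gyro bias(3)];
              the first 9 in local-level coordinates, the last 6 in the body frame.
    f_b     : specific force measured by the accelerometers (body frame).
    omega_b : angular rate measured by the gyros (body frame).
    C_b_n   : body-to-local-level rotation matrix (current attitude).
    """
    vel = x[3:6]
    b_a, b_g = x[9:12], x[12:15]

    xdot = np.zeros(15)
    xdot[0:3] = vel                          # position rate = velocity
    xdot[3:6] = C_b_n @ (f_b - b_a) + G      # velocity rate, bias-corrected force
    xdot[6:9] = C_b_n @ (omega_b - b_g)      # small-angle attitude rate (sketch)
    # IMU biases modeled as random walks: zero deterministic dynamics.
    return xdot

# Stationary, level platform: accelerometers read -G, gyros read zero,
# so the deterministic state derivative vanishes.
xdot = process_model(np.zeros(15), -G, np.zeros(3), np.eye(3))
print(xdot)
```

In an EKF this deterministic model would be driven by process noise on the velocity, attitude, and bias states; the paper's referenced model is more complete than this flat-earth approximation.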
“…Considering two overlapping images, it is only possible to determine camera rotation and up-to-scale translation [1]. Therefore, two-view based methods for navigation aiding [2]- [4] are incapable of eliminating the developing navigation errors in all states.…”
Section: Introduction
confidence: 99%