The integration of image and inertial measurements is an attractive solution to some of these problems. Among other advantages, adding inertial measurements to image-based motion estimation can reduce the sensitivity to incorrect image feature tracking and to camera modeling errors. On the other hand, image measurements can be exploited to reduce the drift that results from integrating noisy inertial measurements, and to allow the additional unknowns needed to interpret inertial measurements, such as the gravity direction and magnitude, to be estimated.

This work has developed both batch and recursive algorithms for estimating camera motion, sparse scene structure, and other unknowns from image, gyro, and accelerometer measurements. A large suite of experiments uses these algorithms to investigate the accuracy, convergence, and sensitivity of motion estimation from image and inertial measurements. Among other results, these experiments show that the correct sensor motion can be recovered even in some cases where estimates from image or inertial measurements alone are grossly wrong, and they explore the relative advantages of image and inertial measurements, and of omnidirectional images, for motion estimation.

To eliminate gross errors and reduce drift in motion estimates from real image sequences, this work has also developed a new robust image feature tracker that exploits the rigid scene assumption and eliminates the heuristics required by previous trackers for handling large motions, detecting mistracking, and extracting features. A proof-of-concept system is also presented that exploits this tracker to estimate six-degree-of-freedom motion from long image sequences and limits drift in the estimates by recognizing previously visited locations.
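As an illustration of how image and inertial measurements can be combined in a batch estimator, the sketch below stacks image reprojection residuals together with gyro and accelerometer residuals into a single error vector that a nonlinear least-squares solver could minimize over the camera poses, velocities, and sparse scene points. The pinhole projection model, the simple finite-difference integration, the frame and sign conventions, and all names and weights here are assumptions made for this sketch; they are not the report's actual formulation.

    # Minimal sketch of a combined image-and-inertial batch objective (assumed
    # formulation for illustration only).
    import numpy as np

    def project(point_cam, focal=1.0):
        # Pinhole projection of a 3-D point expressed in camera coordinates.
        return focal * point_cam[:2] / point_cam[2]

    def batch_residuals(rotations, positions, velocities, points,
                        image_obs, gyro_obs, accel_obs, gravity, dt,
                        w_image=1.0, w_gyro=1.0, w_accel=1.0):
        # rotations[i]  : 3x3 world-to-camera rotation at time step i
        # positions[i]  : camera position in the world frame at time step i
        # velocities[i] : camera velocity in the world frame at time step i
        # points[j]     : 3-D scene point j in the world frame
        # image_obs     : list of (i, j, observed 2-D projection) tuples
        # gyro_obs      : per-step body angular velocity measurements (3-vectors)
        # accel_obs     : per-step body specific-force measurements (3-vectors)
        res = []
        # Image term: reprojection error of each tracked feature.
        for i, j, uv in image_obs:
            p_cam = rotations[i] @ (points[j] - positions[i])
            res.append(w_image * (project(p_cam) - uv))
        # Inertial terms: pose and velocity increments implied by consecutive
        # states should match the gyro and accelerometer measurements.
        for i in range(len(gyro_obs)):
            # Gyro: small-angle rotation change between consecutive poses,
            # extracted from the skew-symmetric part of the relative rotation
            # (sign and frame conventions are illustrative).
            dR = rotations[i].T @ rotations[i + 1]
            omega_est = np.array([dR[2, 1] - dR[1, 2],
                                  dR[0, 2] - dR[2, 0],
                                  dR[1, 0] - dR[0, 1]]) / (2.0 * dt)
            res.append(w_gyro * (omega_est - gyro_obs[i]))
            # Accelerometer: the velocity change should match the measured
            # specific force rotated into the world frame plus gravity, which
            # is why the gravity direction and magnitude appear as unknowns.
            a_world = rotations[i].T @ accel_obs[i] + gravity
            v_est = (velocities[i + 1] - velocities[i]) / dt
            res.append(w_accel * (v_est - a_world))
        return np.concatenate(res)

A batch solver would minimize the squared norm of this stacked residual over all unknowns at once, while a recursive variant would process the same image and inertial residuals one time step at a time.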