DILL, EVAN T., M.S., March 2011, Electrical Engineering

Integration of 3D and 2D Imaging Data for Assured Navigation in Unknown Environments

Director of Thesis: Maarten Uijt de Haag

As technology advances in the area of mobile vehicles, the need for precise, reliable navigation increases with it. Whether the vehicle is an unmanned aerial vehicle (UAV), a manned aerial vehicle (MAV), or an intelligent ground vehicle (IGV), precise navigation capabilities are a constant requirement. This need spurred the invention and development of many navigation systems, including the most widely used system to date, the Global Positioning System (GPS). GPS is a powerful tool that can reliably provide meter-level accuracy on a worldwide scale. With this capability, GPS is the answer to a significant number of navigation problems, but it is not the answer to all of them. Since GPS relies on external signals from orbiting satellites, tasks such as underground navigation and navigation in dense foliage can be difficult due to the attenuation of the signal as it passes through these media. GPS is also very susceptible to multipath at the receiver: if the receiver operates inside a building or in a dense urban environment, the resulting multipath can degrade the received signal to the point that the system loses its true capabilities. Lastly, while GPS capabilities are ideal for military applications, any system that relies on external signals in a military context must contend with the possibility of interference, jamming, or even a direct attack on the system in a wartime scenario.
Although the list of scenarios in which GPS is not a viable answer is small, it is important that those scenarios be addressed. One viable possibility is to develop a new system that complements GPS by remaining functional in scenarios in which GPS is a poor option or not an option at all. This thesis describes and discusses one such complementary system: a self-contained system that uses multiple sensors and the surrounding environment for navigation. The method integrates three-dimensional (3D) point cloud data, two-dimensional (2D) gray-level (intensity) data, 2D digital image data, and data from an Inertial Measurement Unit (IMU). From the 2D and 3D image frames, key features such as planar surfaces, lines, and corners can be extracted from the surroundings, and the positions of these features can be calculated with respect to the sensors observing them. Consecutive observations of these features in the 3D and 2D image frames can then be used to compute and estimate the position and orientation change of the sensors. The use of 3D image features alone suffers from limited feature observability, which degrades position accuracy, while the 2D imagery suffers from an unknown scene depth when estimating position from consecutive image frames. It is expected that the integration of these two data sets will alleviate the problems of the individual methods, resulting in a position and attitude determination method with a h...
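The pose-change computation from consecutive 3D feature observations can be illustrated with a minimal sketch. This is not the thesis's implementation; it assumes feature correspondences between the two frames have already been established, and uses the standard SVD-based (Kabsch) rigid alignment to recover the sensor's rotation and translation between frames:

```python
# Illustrative sketch (assumed approach, not the thesis's algorithm):
# recover the rigid transform (R, t) between two sets of matched 3D
# feature positions observed in consecutive frames, so that Q ≈ R P + t.
import numpy as np

def estimate_pose_change(P, Q):
    """SVD-based rigid alignment of matched point sets P, Q (N x 3)."""
    p_mean = P.mean(axis=0)
    q_mean = Q.mean(axis=0)
    # Cross-covariance of the centered feature sets
    H = (P - p_mean).T @ (Q - q_mean)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection so R is a proper rotation (det = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Synthetic check: rotate a feature cloud 10 degrees about z and shift it
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
P = np.random.default_rng(0).uniform(-5.0, 5.0, size=(20, 3))
Q = P @ R_true.T + t_true
R_est, t_est = estimate_pose_change(P, Q)
```

In practice the recovered (R, t) per frame pair would be chained over time and fused with the IMU and 2D-image measurements; this sketch shows only the single-step geometric core.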