A synthetic vision system for enhancing the pilot's ability to navigate and control the aircraft on the ground is described. The system uses the onboard airport database and images acquired by external sensors such as millimeter-wave, infrared, and low-light TV cameras. Additional navigation information needed by the system is provided by the Inertial Navigation System and the Global Positioning System. The various functions of the system, such as image enhancement, map generation, obstacle detection, collision avoidance, and guidance, are identified. Available technologies applicable to the aircraft ground navigation problem, some of which were developed at NASA, are noted. Example images of a truck crossing the runway while the aircraft flies close to the runway centerline are described. These images are drawn from a sequence acquired during one of several flight experiments conducted by NASA to collect data for the development and verification of synthetic vision concepts. These experiments provide a realistic database including video and infrared images, motion states from the Inertial Navigation System and the Global Positioning System, and camera parameters.
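
To make the obstacle-detection and guidance functions more concrete, the following is a minimal illustrative sketch, not taken from the paper, of how a detection in a forward-looking camera image might be projected onto the airport surface using the kind of data the flight experiments recorded (camera parameters and aircraft motion states from the Inertial Navigation System and the Global Positioning System). It assumes a pinhole camera, a flat runway surface, and a level aircraft attitude; the function name pixel_to_ground and all numeric values are hypothetical placeholders.

    # Illustrative sketch (hypothetical, not the paper's algorithm): map an image
    # pixel of a detected obstacle to local runway coordinates, assuming a pinhole
    # camera, a flat runway, and a level aircraft attitude.
    import numpy as np

    def pixel_to_ground(u, v, cam_height_m, focal_px, cx, cy,
                        pitch_rad, heading_rad, aircraft_xy):
        """Return (x, y) of the obstacle in a local runway frame, given camera
        height above ground, focal length in pixels, principal point (cx, cy),
        camera pitch below the horizon, aircraft heading, and aircraft position
        from INS/GPS."""
        # Ray through the pixel in the camera frame (x right, y down, z forward).
        ray_cam = np.array([(u - cx) / focal_px, (v - cy) / focal_px, 1.0])
        # Rotate the ray into a level body frame using the camera pitch angle.
        cp, sp = np.cos(pitch_rad), np.sin(pitch_rad)
        R_pitch = np.array([[1.0, 0.0, 0.0],
                            [0.0,  cp,  sp],
                            [0.0, -sp,  cp]])
        ray_level = R_pitch @ ray_cam
        if ray_level[1] <= 0.0:
            return None  # ray does not intersect the ground plane
        # Scale the ray to the ground plane (flat-runway assumption).
        scale = cam_height_m / ray_level[1]
        forward, right = scale * ray_level[2], scale * ray_level[0]
        # Rotate by heading and add the aircraft position (from INS/GPS).
        ch, sh = np.cos(heading_rad), np.sin(heading_rad)
        x = aircraft_xy[0] + forward * ch - right * sh
        y = aircraft_xy[1] + forward * sh + right * ch
        return x, y

    # Example: an obstacle detected slightly right of the image center, with the
    # camera 4 m above the runway and pitched 3 degrees below the horizon.
    print(pixel_to_ground(330, 260, cam_height_m=4.0, focal_px=800.0,
                          cx=320.0, cy=240.0, pitch_rad=np.radians(3.0),
                          heading_rad=0.0, aircraft_xy=(0.0, 0.0)))

The resulting surface position could then be compared against the onboard airport database and the aircraft's projected path for the collision-avoidance and guidance functions described above.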