Navigation has been a major challenge for the successful operation of autonomous aircraft. Although success has been achieved using active sensing methods such as radar, sonar and lidar, as well as the global positioning system (GPS), such methods are not always suitable due to their susceptibility to jamming and outages. Vision, as a passive navigation method, is considered an excellent alternative; however, the development of vision-based autonomous systems for outdoor environments has proven difficult. For flying systems, this is compounded by the additional challenges posed by environmental and atmospheric conditions. In this paper, we present a novel passive vision-based algorithm that is invariant to illumination, scale and rotation. We use a three-stage landmark recognition algorithm and an algorithm for waypoint matching. Our algorithms have been tested in both synthetic and real-world outdoor environments, demonstrating good overall performance. We further compare our feature matching method with the speeded-up robust features (SURF) method, with results demonstrating that our method outperforms SURF in both feature matching accuracy and computational cost.