This paper describes a new approach to vision-based positioning for unmanned aerial vehicles. The proposed method matches landmarks found in an aerial image against a reference database and uses the match data to estimate the current position of the vehicle. Previous research in the area has generally focused on matching raw aerial image data to a set of reference images. Although such methods can be designed to provide acceptable results in specific scenarios, they struggle with variations in lighting, seasonal changes, and changing environments. We present a new multistage method that aims to overcome these challenges by analyzing the image for key features that can be matched to known ground objects. The approach can be divided into a set of subproblems: detection, fingerprinting, matching, and state estimation. Computational analysis shows that the system can detect and match landmarks against a prior database with 70% accuracy when operating with real landmark databases, high distortion levels, and varying viewing angles, thus enabling it to determine the position of the vehicle. The proposed system is more flexible than current methods because its core components (the descriptor and the matcher) are independent of the sensor used for detection and can be used with any type of landmark. The system is therefore expected to apply to a wide range of problems, from pure vision-based navigation for small unmanned aerial vehicles to planetary exploration, as long as a sensor capable of registering the relevant landmarks is available.
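The four-stage decomposition above can be sketched as a minimal pipeline. This is an illustrative sketch only, not the paper's implementation: all names (`Landmark`, `detect`, `fingerprint`, `match`, `estimate_position`), the tuple-based descriptors, and the nearest-neighbour matching with centroid-based position fusion are assumptions introduced here for clarity.

```python
# Hypothetical sketch of the four-stage pipeline:
# detection -> fingerprinting -> matching -> state estimation.
# All names and data structures are illustrative, not from the paper.
import math
from dataclasses import dataclass

@dataclass
class Landmark:
    descriptor: tuple  # sensor-independent fingerprint
    position: tuple    # known ground coordinates (x, y)

def detect(image):
    """Stage 1: extract candidate landmark features from the aerial image."""
    return image  # placeholder: assume features are provided directly

def fingerprint(feature):
    """Stage 2: compute a compact, sensor-independent descriptor."""
    return tuple(round(v, 1) for v in feature)

def match(descriptor, database):
    """Stage 3: nearest-neighbour match against the reference database."""
    return min(database, key=lambda lm: math.dist(lm.descriptor, descriptor))

def estimate_position(matches):
    """Stage 4: fuse matched ground positions into a vehicle position estimate."""
    xs = [m.position[0] for m in matches]
    ys = [m.position[1] for m in matches]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Usage: match two detected features against a two-landmark database.
db = [Landmark((0.0, 1.0), (10.0, 20.0)), Landmark((5.0, 5.0), (30.0, 40.0))]
features = detect([(0.04, 1.02), (4.98, 5.01)])
matched = [match(fingerprint(f), db) for f in features]
print(estimate_position(matched))  # centroid of the matched landmark positions
```

Because each stage consumes only the output of the previous one, the descriptor and matcher stages are decoupled from the sensor that produced the input, which is the flexibility the paper claims for its core components.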