Changes in weather, illumination, season, and viewing angle alter the visual appearance of objects, which makes it difficult for unmanned vehicles that rely on visual localization to determine their position. This paper proposes a coordinated positioning method that combines semantic information with a geometric relationship distribution (GRD), improving the robustness of unmanned-vehicle localization under such appearance changes. First, we improve the Fast-SCNN semantic segmentation network by replacing its fully connected layer with the conv4-3 module, so that the spatial information of the image is not lost in a fully connected layer. Because the conv4-3 layer carries the richest semantic information, we use the image's semantic content to create a dense, salient scene description. These salient descriptors are learned from a large dataset of perceptual changes, and the method can accurately segment geometrically stable image regions. We combine the features of these salient regions with the existing holistic representation to produce a more robust scene descriptor. Second, we design a method that integrates semantic-label matching with geometric distribution relations, yielding a new label-and-landmark map for loop-closure place recognition. The pairwise geometric relationships between landmarks are encoded as a continuous probability density function, the GRD function, expressed through a basis expansion of Laguerre polynomials and Fourier series. This orthogonal basis representation allows efficient computation of rotation and translation invariants, which are used to compare signatures and to search for potential loop-closure candidates. Finally, we evaluate our method against state-of-the-art algorithms, including OpenSeqSLAM, AlexNet, and VSO, to demonstrate its advantages. Experimental results on representative datasets show that the proposed Fast-SCNN-based method outperforms these alternatives.
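To make the head replacement concrete, the following is a minimal PyTorch sketch of swapping a flatten-plus-fully-connected classifier for a convolutional head so that per-location spatial information survives; the class name, channel widths, and layer choices are illustrative assumptions, not the paper's exact conv4-3 configuration.

```python
import torch
import torch.nn as nn

class ConvSegmentationHead(nn.Module):
    """Illustrative convolutional head that preserves spatial layout.

    Replaces a flatten + fully connected classifier, which would discard
    the H x W arrangement of features, with 3x3/1x1 convolutions so every
    spatial location keeps its own semantic prediction. Channel sizes
    below are assumptions, not values from the paper.
    """

    def __init__(self, in_channels: int = 128, num_classes: int = 19):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, num_classes, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) feature map; the output keeps the (H, W) grid,
        # giving a per-pixel class score map instead of a single vector.
        return self.head(x)

# Dense per-location predictions survive, unlike with a fully connected layer.
features = torch.randn(1, 128, 32, 64)
logits = ConvSegmentationHead()(features)   # shape: (1, 19, 32, 64)
```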
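The Laguerre-Fourier encoding can be written out explicitly. The source does not give the normalization or truncation orders, so the following is one plausible form under those assumptions.

```latex
% One plausible form of the GRD expansion (truncation orders N, M and
% the normalisation are assumptions; the source does not specify them).
\[
  f(r,\theta) = \sum_{n=0}^{N}\sum_{m=-M}^{M} c_{nm}\,\ell_n(r)\,e^{im\theta},
  \qquad \ell_n(r) = e^{-r/2} L_n(r),
\]
% where L_n is the n-th Laguerre polynomial, so the \ell_n are
% orthonormal on [0, \infty). A global rotation by \alpha shifts every
% pair angle, \theta \mapsto \theta + \alpha, and acts on the
% coefficients as a pure phase,
\[
  c_{nm} \;\longmapsto\; e^{-im\alpha}\, c_{nm},
\]
% so the magnitudes |c_{nm}| are rotation-invariant, while translation
% invariance holds because only pairwise landmark relations enter f.
```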
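Under the same assumptions, here is a short numeric sketch of how such an invariant signature could be computed and compared from 2-D landmark positions; the function name grd_signature, the distance normalization scale, and the truncation orders are hypothetical choices for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.special import eval_laguerre

def grd_signature(landmarks: np.ndarray, n_max: int = 8, m_max: int = 8,
                  scale: float = 10.0) -> np.ndarray:
    """Rotation/translation-invariant signature from 2-D landmark positions.

    Treats pairwise landmark relations as samples of the GRD density and
    estimates the Laguerre-Fourier coefficients by averaging the basis
    functions over all ordered pairs; 'scale' normalises distances and is
    an assumed tuning parameter, not a value from the paper.
    """
    # Pairwise displacement vectors; translation cancels out here.
    diff = landmarks[None, :, :] - landmarks[:, None, :]
    mask = ~np.eye(len(landmarks), dtype=bool)
    d = diff[mask]                               # (K*(K-1), 2) displacements
    r = np.linalg.norm(d, axis=1) / scale        # normalised pair distances
    theta = np.arctan2(d[:, 1], d[:, 0])         # pair bearings

    # c_{nm} ~ mean over pairs of l_n(r) e^{-i m theta}, l_n(r) = e^{-r/2} L_n(r).
    coeffs = np.empty((n_max + 1, 2 * m_max + 1), dtype=complex)
    for n in range(n_max + 1):
        radial = np.exp(-r / 2.0) * eval_laguerre(n, r)
        for j, m in enumerate(range(-m_max, m_max + 1)):
            coeffs[n, j] = np.mean(radial * np.exp(-1j * m * theta))

    # A global rotation multiplies c_{nm} by a pure phase; magnitudes are invariant.
    return np.abs(coeffs).ravel()

# Signatures of candidate places can then be compared, e.g. by Euclidean distance.
a = np.random.rand(30, 2) * 20.0
R = np.array([[np.cos(0.7), -np.sin(0.7)], [np.sin(0.7), np.cos(0.7)]])
b = a @ R.T + np.array([5.0, -3.0])              # rotated and translated copy
print(np.linalg.norm(grd_signature(a) - grd_signature(b)))  # ~ 0 up to float error
```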