With the development of computer vision and mobile computing, assistive navigation for people with visual impairment has attracted growing attention from the research community. Two key challenges of assistive navigation, "Where am I?" and "What are the surroundings?", remain to be resolved by taking advantage of visual information. In this paper, we leverage a prevailing compact network as the backbone to build a unified network featuring two branches that implement scene description and scene recognition separately. Based on the unified network, the proposed pipeline performs scene recognition and visual localization simultaneously in the scenario of assistive navigation. The visual localization pipeline involves image retrieval followed by sequence matching. In the experiments, different configurations of the proposed pipeline are tested on public datasets to search for the optimal parameters. Moreover, on real-world datasets captured by the wearable assistive device, the proposed assistive navigation pipeline is shown to achieve satisfactory performance. On the challenging dataset, the top-5 precision of scene recognition exceeds 80%, and the visual localization precision is over 60% at a recall of 60%. The related code and datasets are open-sourced at https://github.com/chengricky/ScenePlaceRecognition.
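To make the two-stage localization concrete, the sketch below illustrates how sequence matching can aggregate per-frame image-retrieval similarities; it is not the paper's actual implementation. The function name `sequence_matching`, the NumPy interface, and the single fixed-velocity diagonal alignment are assumptions chosen for brevity.

```python
import numpy as np

def sequence_matching(similarity, seq_len=5):
    """Aggregate retrieval similarities over a short query sequence.

    Hypothetical sketch: scores each candidate database index by summing
    similarities along a constant-velocity diagonal alignment of the last
    `seq_len` query frames, in the spirit of sequence-based matching.

    similarity: (seq_len, num_db) array, where similarity[i, j] is the
    retrieval similarity between query frame i and database image j.
    Returns the best-matching database index for the latest query frame
    and its aggregated score.
    """
    seq_len, num_db = similarity.shape
    best_idx, best_score = -1, -np.inf
    for j in range(seq_len - 1, num_db):
        # Sum similarities along the diagonal ending at database index j,
        # i.e., query frame i is aligned to database index j - (seq_len-1) + i.
        score = sum(similarity[i, j - (seq_len - 1) + i] for i in range(seq_len))
        if score > best_score:
            best_idx, best_score = j, score
    return best_idx, best_score

# Example usage with random similarity scores standing in for retrieval output.
rng = np.random.default_rng(0)
idx, score = sequence_matching(rng.random((5, 100)))
```

In practice, sequence-based methods typically search over a range of alignment velocities rather than a single diagonal; the fixed-velocity alignment here only keeps the sketch short.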