In this paper, we introduce the first spatial-temporal-textual framework that facilitates the validation and optimization of indoor navigation systems using trial participants' continuously recorded feedback. The proposed framework enables us to pinpoint specific areas of improvement, such as which parts of the user interface require changes, which areas of the environment necessitate modifications to the localization algorithm, and/or which instructions should be improved. Conventional evaluation of such systems relies on collecting users' feedback after the trials in the form of interviews and/or questionnaires. While necessary and important, this form of evaluation provides only a summary of users' views of the system. In contrast, the proposed framework significantly increases the resolution of user feedback by continuously collecting the following information in a spatial-temporal context during the trials: user comments, user interface interactions, and navigation instructions. The framework comprises four main components: 1) trial data preprocessing; 2) textual analysis of comments; 3) spatial/temporal/instructions analysis; and 4) results visualization. We present a case study that illustrates the use of the framework with the PERCEPT indoor navigation system for blind and visually impaired users.

INDEX TERMS Blind and visually impaired, wayfinding, natural language processing, machine learning, PERCEPT, navigation, comments analysis, spatial and temporal analysis, geo-map visualization.