In this paper, we explore a new way to provide context-aware assistance for indoor navigation using a wearable vision system. We investigate how to represent the cognitive knowledge of wayfinding from first-person-view videos in real time, and how to provide context-aware navigation instructions in a human-like manner. Inspired by the human cognitive process of wayfinding, we propose a novel cognitive model that represents visual concepts in a hierarchical structure, which facilitates efficient and robust localization based on cognitive visual concepts. We then design a prototype system that provides intelligent context-aware assistance based on this cognitive indoor navigation knowledge model. We conduct field tests to evaluate the system's efficacy, benchmarking it against traditional 2D maps and human guidance. The results show that context-awareness built on cognitive visual perception enables the system to emulate the efficacy of a human guide, leading to a positive user experience.
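To make the hierarchical-localization idea concrete, the following minimal Python sketch indexes visual concepts as a coarse-to-fine tree and localizes a frame by descending the branch supported by detected concepts. The concept names, tree layout, and greedy matching rule are illustrative assumptions, not the paper's actual model.

from dataclasses import dataclass, field

@dataclass
class ConceptNode:
    name: str                       # e.g. "level-2", "corridor-A", "exit-sign"
    children: list = field(default_factory=list)

def localize(node, observed, path=()):
    """Descend the hierarchy, following a child whose concept was detected
    in the current frame; return the coarse-to-fine location path."""
    path = path + (node.name,)
    matches = [c for c in node.children if c.name in observed]
    if not matches:
        return path                 # deepest node supported by the observation
    # Greedily follow the first matching child; a real system would score
    # all children and keep multiple location hypotheses.
    return localize(matches[0], observed, path)

building = ConceptNode("building", [
    ConceptNode("level-2", [
        ConceptNode("corridor-A", [ConceptNode("exit-sign")]),
        ConceptNode("lobby"),
    ]),
])

print(localize(building, {"level-2", "corridor-A", "exit-sign"}))
# -> ('building', 'level-2', 'corridor-A', 'exit-sign')

Because matching proceeds top-down, a frame only needs to be compared against the children of the current node, which is what makes localization over a large building efficient.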
Inspired by progress in cognitive science, artificial intelligence, computer vision, and mobile computing, we propose and implement a wearable virtual usher for cognitive indoor navigation based on egocentric visual perception. We introduce a novel computational framework of cognitive wayfinding in indoor environments that comprises a context model, a route model, and a process model. The context model represents cognitive knowledge of indoor scenes in a hierarchical structure. Given a start position and a destination, a Bayesian network represents the navigation route derived from the context model. A novel dynamic Bayesian network (DBN) accommodates the dynamic process of navigation based on real-time first-person-view visual input, which involves multiple asynchronous temporal dependencies. To adapt to large variations in travel time across trip segments, we propose an online adaptation algorithm for the DBN, yielding a self-adaptive DBN. A prototype system is built and tested for technical performance and user experience. Quantitative evaluation shows that our method improves accuracy by over 13% compared with baseline approaches based on hidden Markov models. In the user study, our system guided participants to their destinations, emulating a human usher in multiple respects.
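The filtering idea behind a self-adaptive DBN can be sketched as a forward pass over route segments in which each segment's self-transition probability is re-estimated from its expected travel time. The variable names, the geometric-duration heuristic, and the toy likelihoods below are assumptions for illustration, not the authors' exact model.

import numpy as np

def forward_step(belief, stay_prob, likelihood):
    """One time step: predict with per-segment stay/advance transitions,
    then update with each segment's observation likelihood."""
    n = len(belief)
    pred = np.zeros(n)
    for i in range(n):
        pred[i] += belief[i] * stay_prob[i]                 # stay in segment i
        if i + 1 < n:
            pred[i + 1] += belief[i] * (1 - stay_prob[i])   # advance to i+1
        else:
            pred[i] += belief[i] * (1 - stay_prob[i])       # absorb at goal
    post = pred * likelihood
    return post / post.sum()

def adapt_stay_prob(expected_steps):
    """Geometric-duration heuristic: a segment expected to take d steps
    gets self-transition probability 1 - 1/d, re-adapted as d is updated."""
    return 1.0 - 1.0 / np.maximum(expected_steps, 1.0)

belief = np.array([1.0, 0.0, 0.0])                 # start at segment 0
stay = adapt_stay_prob(np.array([5.0, 8.0, 3.0]))  # online travel-time estimates
for likelihood in ([0.8, 0.1, 0.1], [0.2, 0.7, 0.1], [0.1, 0.3, 0.6]):
    belief = forward_step(belief, stay, np.array(likelihood))
print(belief.round(3))

Re-running adapt_stay_prob whenever the travel-time estimates change is what distinguishes this adaptive filter from a fixed-parameter HMM baseline.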
We present SocioGlass, a system built on Google Glass paired with a mobile phone that provides the user with in-situ information about an acquaintance during face-to-face communication. The system recognizes faces from the live feed of visual input and retrieves relevant information about the person whose face matches an entry in the database. To provide interaction assistance, multiple aspects of personal information are categorized by their relevance to the interaction scenario or context, so that the assistance adapts to the social context. The system can be used to help acquaintances build relationships, or to assist people with memory problems.
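The recognize-then-retrieve flow described above can be sketched in a few lines of Python. The cosine-similarity matcher, the profile database layout, and the context categories ("work", "social") are illustrative assumptions, not SocioGlass's actual API.

import numpy as np

PROFILE_DB = {
    "alice": {
        "work":   {"role": "Data engineer", "project": "Pipeline v2"},
        "social": {"hobby": "Climbing", "last_met": "Conference 2015"},
    },
}

def recognize(face_embedding, gallery, threshold=0.6):
    """Return the best-matching identity, or None if no gallery embedding
    is similar enough (cosine similarity stands in for the real matcher)."""
    best, best_sim = None, threshold
    for name, ref in gallery.items():
        sim = float(np.dot(face_embedding, ref) /
                    (np.linalg.norm(face_embedding) * np.linalg.norm(ref)))
        if sim > best_sim:
            best, best_sim = name, sim
    return best

def retrieve(identity, context):
    """Select only the profile fields relevant to the current social context."""
    return PROFILE_DB.get(identity, {}).get(context, {})

# Usage: match a face embedding from the live feed, then filter by context.
gallery = {"alice": np.array([0.9, 0.1, 0.4])}
query = np.array([0.88, 0.12, 0.39])         # embedding of the detected face
who = recognize(query, gallery)
print(who, retrieve(who, context="work"))

Separating recognition from context-filtered retrieval is what lets the same profile database serve different interaction scenarios.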