The design of augmented reality interfaces for people with mobility impairments is a novel area with great potential and multiple open research challenges. In this paper we present an augmented reality user interface for controlling a smart wheelchair through a head-mounted display, providing assistance to people with restricted mobility. Our motivation is to reduce the cognitive load required to control a smart wheelchair. A key element of our platform is the ability to control the smart wheelchair using the concepts of affordances and signifiers. In addition to the technical details of our platform, we present a baseline study that evaluates our platform through user trials with able-bodied individuals and two different affordances: 1) Door-Go Through and 2) People-Approach. To present these affordances to the user, we compared fixed symbol-based signifiers against our novel dynamic signifiers in terms of how easy the suggested actions, and their relation to the objects, are to understand. Our results show a clear preference for dynamic signifiers. In addition, the task load reported by participants is lower when controlling the smart wheelchair with our augmented reality user interface than when using the joystick, which is consistent with their qualitative answers.
Assistive robots can increase the autonomy and quality of life of people with disabilities, and Augmented Reality (AR) User Interfaces (UIs) have the potential to facilitate their use. In this paper, we argue that to fulfil this potential and accommodate a more diverse user base, AR UIs should proactively identify user affordances, i.e., action options that are possible in the current context. However, current AR UIs for the control of assistive robots do not readily combine atomic actions and can therefore only offer individual actions as options. To overcome these limitations, we propose Affordance-Aware Proactive Planning ((AP)²), an algorithm that proactively identifies feasible sequences of atomic actions by leveraging large datasets of plans described in natural language. (AP)² combines natural language processing and planning algorithms to provide the most relevant and feasible plans given the user's context, and provides means to reduce the time required to generate and present these as options to the user. Our main contributions are: 1) we propose a method that allows an affordance-aware AR UI for robot control to combine atomic actions and provide higher-level options to the user, 2) we provide a means for dynamically updating goal states and the number of semantically relevant plans that are analysed, improving interactivity for the user, and 3) we validate the applicability of the proposed architecture with an assistive mobile manipulator deployed in a bedroom environment and controlled through an AR UI.
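The core idea of combining atomic actions into feasible higher-level plans can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `Action` class, the `feasible_plans` function, and the example bedroom actions are all assumed names, and the NLP-based plan retrieval step is abstracted away into a precomputed list of candidate plans.

```python
# Hypothetical sketch of affordance-aware plan filtering: given atomic
# actions with preconditions and effects, keep only the candidate plans
# whose every step is feasible when executed in order, starting from the
# affordances available in the current context. All names are assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset  # affordances that must hold before the action
    effects: frozenset        # affordances the action makes available

def feasible_plans(candidate_plans, context):
    """Return the candidate plans whose atomic actions can all run in
    sequence, given the affordances detected in the current context."""
    valid = []
    for plan in candidate_plans:
        state = set(context)
        feasible = True
        for action in plan:
            if not action.preconditions <= state:
                feasible = False
                break
            state |= action.effects  # executing the action enables new affordances
        if feasible:
            valid.append(plan)
    return valid

# Illustrative atomic actions for a mobile manipulator in a bedroom.
approach = Action("approach_table", frozenset({"table_visible"}),
                  frozenset({"near_table"}))
grasp = Action("grasp_cup", frozenset({"near_table", "cup_on_table"}),
               frozenset({"holding_cup"}))

plans = [[approach, grasp], [grasp]]  # hypothetical retrieved candidate plans
context = {"table_visible", "cup_on_table"}
# Only the first plan survives: grasping requires being near the table first.
print([[a.name for a in p] for p in feasible_plans(plans, context)])
```

Only feasible action sequences would then be rendered as options in the AR UI, which is how such a filter could keep proactively generated plans grounded in the user's current context.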