This paper presents an evaluation of an augmented reality (AR) multimodal interface that combines speech and paddle gestures for interaction with virtual objects in the real world. We briefly describe our AR multimodal interface architecture and our multimodal fusion strategies, which are based on a combination of time-based and domain semantics. We then present the results of a user study comparing multimodal input with gesture-only input. The results show that combining speech and paddle gestures improves the efficiency of user interaction. Finally, we offer design recommendations for developing other multimodal AR interfaces.
This paper describes an augmented reality (AR) multimodal interface that uses speech and paddle gestures for interaction. The application allows users to intuitively arrange virtual furniture in a virtual room using a combination of speech and gestures made with a real paddle. Unlike other multimodal AR applications, ours bases its multimodal fusion on a combination of time-based and semantic techniques to disambiguate a user's speech and gesture input. We describe our AR multimodal interface architecture and discuss how the multimodal inputs are semantically integrated into a single interpretation by considering the input time stamps, the object properties, and the user context.
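The abstracts do not include code, but the fusion step they describe (a temporal check plus a semantic check over object properties) can be illustrated with a short sketch. The following Python is our own hypothetical rendering of that idea; all names (`SpeechInput`, `GestureInput`, `fuse`, `TIME_WINDOW`) and the window value are assumptions, not details from the original system.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of time-based + semantic multimodal fusion.
# Names and the time window are illustrative assumptions.

TIME_WINDOW = 1.5  # seconds; inputs further apart are not fused (assumed value)

@dataclass
class SpeechInput:
    timestamp: float
    command: str                 # e.g. "move", "delete"
    object_type: Optional[str]   # e.g. "chair", or None if unspecified

@dataclass
class GestureInput:
    timestamp: float
    target_id: str               # object the paddle currently selects
    target_type: str             # semantic type of that object

def fuse(speech: SpeechInput, gesture: GestureInput) -> Optional[dict]:
    """Combine speech and a paddle gesture into one interpretation.

    Time-based check: the two inputs must be close in time.
    Semantic check: the spoken object type (if any) must match the
    type of the object the paddle gesture selects.
    """
    if abs(speech.timestamp - gesture.timestamp) > TIME_WINDOW:
        return None  # too far apart in time to belong together
    if speech.object_type and speech.object_type != gesture.target_type:
        return None  # semantic mismatch: said "chair" but pointed at a table
    return {"action": speech.command, "target": gesture.target_id}

# Example: "move the chair" spoken while pointing the paddle at chair_3
print(fuse(SpeechInput(10.0, "move", "chair"),
           GestureInput(10.4, "chair_3", "chair")))
```

The example resolves the spoken command to a concrete object only when both checks pass, which is the disambiguation role the abstract attributes to combining time stamps with object properties.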
Recent advances in interface technology allow users to interact with different spaces such as Virtual Reality (VR), Augmented Reality (AR) and Ubiquitous Computing (UC) spaces. Previously, research on human-computer interaction (HCI) in VR, AR and UC has largely been carried out in separate communities. Here, we combine these three interaction spaces into a single space called Tangible Space. We propose the VARU framework, designed for rapid prototyping of tangible space applications and built to provide extensibility, flexibility and scalability. Depending on the available resources, the user can interact with the virtual, physical or mixed environment. Bringing the VR, AR and UC spaces together in a single platform makes it possible to explore different types of collaboration across the spaces. Finally, we present a prototype application built with the VARU framework.
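The abstract describes VARU only at a high level, but the "one platform, multiple spaces" idea can be sketched. The Python below is our own hypothetical illustration, not the VARU API: class names (`InteractionSpace`, `VRSpace`, `ARSpace`, `UCSpace`, `SharedObject`) are invented for the example.

```python
from abc import ABC, abstractmethod

# Hypothetical illustration of one logical object being presented
# through whichever interaction spaces are available. All names are
# our own; they do not come from the VARU framework.

class InteractionSpace(ABC):
    @abstractmethod
    def render(self, obj: "SharedObject") -> None: ...

class VRSpace(InteractionSpace):
    def render(self, obj):
        print(f"[VR] drawing fully virtual {obj.name}")

class ARSpace(InteractionSpace):
    def render(self, obj):
        print(f"[AR] overlaying {obj.name} on the physical scene")

class UCSpace(InteractionSpace):
    def render(self, obj):
        print(f"[UC] driving an ambient device for {obj.name}")

class SharedObject:
    """A single logical object visible from any of the three spaces."""
    def __init__(self, name: str):
        self.name = name

def present(obj: SharedObject, available: list[InteractionSpace]) -> None:
    # Depending on available resources, the same object is experienced
    # as virtual, physical-overlaid, or ambient.
    for space in available:
        space.render(obj)

present(SharedObject("virtual lamp"), [ARSpace(), UCSpace()])
```

The point of the sketch is only that a shared object model, decoupled from any single presentation space, is what lets users in different spaces collaborate on the same content.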