This paper presents a multimodal interaction framework for semantic 3D object manipulation in virtual reality. In our framework, interaction devices such as a keyboard, mouse, joystick, or tracker can be combined with speech utterances to issue commands to the system. We define an object ontology, based on common-sense knowledge, that specifies relationships between virtual objects. Taking into account the current user context and the object ontology, the semantic integration component merges the interpretation results from the input manager and sends the outcome to the interaction manager, which maps it to an appropriate object manipulation. The system can thus understand the user's intention and assist in achieving the intended goal during the handling process, rather than relying entirely on the user's control of the interaction device and the object, thereby avoiding nonsensical manipulations.
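To make the described data flow concrete, the sketch below shows one way such a pipeline could be wired together: an input manager fuses a device event with a speech utterance, a semantic integration component validates the interpreted command against the ontology and the user context, and an interaction manager maps the result to a manipulation. The class and method names (`InputManager`, `SemanticIntegrator`, `InteractionManager`, `allows`) and the triple-based ontology representation are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the described pipeline; all names are illustrative
# assumptions, not the framework's actual implementation.
from dataclasses import dataclass, field


@dataclass
class UserContext:
    selected_object: str | None = None   # object currently grabbed or selected


@dataclass
class Ontology:
    # Common-sense relations between object types, e.g. a "cup" may be "place"d on a "table".
    relations: dict[tuple[str, str, str], bool] = field(default_factory=dict)

    def allows(self, subject: str, relation: str, target: str) -> bool:
        return self.relations.get((subject, relation, target), False)


class InputManager:
    """Fuses a device event (mouse, tracker, ...) with a speech utterance."""

    def interpret(self, device_event: dict, utterance: str) -> dict:
        # A real system would run speech recognition and gesture classification here.
        return {"action": utterance.split()[0], "target": device_event.get("pointed_at")}


class SemanticIntegrator:
    """Checks the interpreted command against the ontology and the user context."""

    def __init__(self, ontology: Ontology):
        self.ontology = ontology

    def integrate(self, interpretation: dict, context: UserContext) -> dict | None:
        subject = context.selected_object
        action, target = interpretation["action"], interpretation["target"]
        if subject and target and self.ontology.allows(subject, action, target):
            return {"manipulate": subject, "action": action, "target": target}
        return None  # nonsensical manipulations are rejected


class InteractionManager:
    """Maps the integrated command to a concrete object manipulation."""

    def execute(self, command: dict | None) -> str:
        if command is None:
            return "ignored"
        return f"{command['action']} {command['manipulate']} -> {command['target']}"


# Example: the user holds a cup, points at a table, and says "place ...".
ontology = Ontology({("cup", "place", "table"): True})
context = UserContext(selected_object="cup")
interpretation = InputManager().interpret({"pointed_at": "table"}, "place the cup on the table")
command = SemanticIntegrator(ontology).integrate(interpretation, context)
print(InteractionManager().execute(command))  # -> "place cup -> table"
```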