The MagicBook is a Mixed Reality interface that uses a real book to seamlessly transport users between Reality and Virtuality. A vision-based tracking method is used to overlay virtual models on real book pages, creating an Augmented Reality (AR) scene. When users see an AR scene they are interested in, they can fly inside it and experience it as an immersive Virtual Reality (VR). The interface also supports multi-scale collaboration, allowing multiple users to experience the same virtual environment from either an egocentric or an exocentric perspective. In this paper we describe the MagicBook prototype, potential applications, and user response.
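The core of the AR mode described above is anchoring a virtual model to the tracked pose of a physical book page. The following is a minimal sketch of that compositing step, assuming the vision-based tracker has already recovered the page's pose as a 4x4 camera-space transform; the matrix layout and function names are illustrative assumptions, not the MagicBook authors' actual pipeline.

```python
# Sketch: anchoring a virtual model to a tracked book page (MagicBook-style AR).
# Assumes a tracker has produced `page_pose`, the page's 4x4 pose in camera
# space. All names and conventions here are illustrative, not the paper's API.

def mat_mul(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    """Build a 4x4 translation matrix."""
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def overlay_transform(page_pose, model_offset):
    """World transform for a virtual model anchored to the tracked page.

    page_pose:    camera-space pose of the detected book page.
    model_offset: model's pose relative to the page (e.g. resting on it).
    """
    return mat_mul(page_pose, model_offset)

# Example: page detected 0.5 m in front of the camera, with the model
# raised 0.1 m above the page surface.
page = translation(0.0, 0.0, -0.5)
model = overlay_transform(page, translation(0.0, 0.1, 0.0))
print(model[1][3], model[2][3])  # 0.1 -0.5
```

Because the model is expressed relative to the page, it stays registered to the book as the user moves; "flying inside" the scene then amounts to re-expressing the viewpoint inside the same model at egocentric scale.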
Three-dimensional user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of 3-D interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3-D tasks and the use of traditional 2-D interaction styles in 3-D environments. We divide most user-interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques but also practical guidelines for 3-D interaction design and widely held myths. Finally, we briefly discuss two approaches to 3-D interaction design and some example applications with complex 3-D interaction requirements. We also present an annotated online bibliography as a reference companion to this article.

Introduction

User interfaces (UIs) for computer applications are becoming more diverse. Mice, keyboards, windows, menus, and icons, the standard parts of traditional WIMP interfaces, are still prevalent, but nontraditional devices and interface components are proliferating rapidly. These include spatial input devices such as trackers, 3-D pointing devices, and whole-hand devices allowing gestural input. Three-dimensional, multisensory output technologies, such as stereoscopic projection displays, head-mounted displays (HMDs), spatial audio systems, and haptic devices, are also becoming more common. With this new technology, new problems have also been revealed. People often find it inherently difficult to understand 3-D spaces and to perform actions in free space (Herndon, van Dam, & Gleicher, 1994). Although we live and act in a 3-D world, the physical world contains many more cues for understanding, and constraints and affordances for action, than can currently be represented accurately in a computer simulation.
Therefore, great care must go into the design of user interfaces and interaction techniques for 3-D applications. It is clear that simply adapting traditional WIMP interaction styles to three dimensions does not provide a complete solution to this problem. Rather, novel 3-D user interfaces, based on real-world interaction or some other metaphor, must be developed. This paper is a broad overview of the current state of the art in 3-D user interfaces and interaction. It summarizes some of the major components of tutorials and courses given by the authors at various conferences, including the 1999 Symposium on Virtual Reality Software and Technology. Our goals are
The acceptance of virtual environment (VE) technology requires scrupulous optimization of the most basic interactions in order to maximize user performance and provide efficient and enjoyable virtual interfaces. Motivated by insufficient understanding of the human factors design implications of interaction techniques and tools for virtual interfaces, this paper presents results of a formal study that compared two basic interaction metaphors for egocentric direct manipulation in VEs, virtual hand and virtual pointer, in object selection and positioning experiments. The goals of the study were to explore immersive direct manipulation interfaces, compare performance characteristics of interaction techniques based on the metaphors of interest, understand their relative strengths and weaknesses, and derive design guidelines for practical development of VE applications.
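The two metaphors compared in the study above can be contrasted concretely: a virtual hand selects by touching an object, while a virtual pointer selects by ray-casting from the hand. The sketch below illustrates that difference; the object representations, thresholds, and names are assumptions for illustration, not the paper's experimental setup.

```python
import math

# Contrast of the two egocentric selection metaphors: touch-based virtual
# hand vs. ray-casting virtual pointer. Objects are (name, center, radius)
# bounding spheres; all details are illustrative assumptions.

def virtual_hand_select(hand_pos, objects, reach=0.05):
    """Select the object the virtual hand touches (within a small reach)."""
    best, best_d = None, reach
    for name, center, radius in objects:
        d = math.dist(hand_pos, center) - radius
        if d <= best_d:
            best, best_d = name, d
    return best

def virtual_pointer_select(origin, direction, objects):
    """Cast a ray from the hand; select the nearest intersected object."""
    best, best_t = None, math.inf
    for name, center, radius in objects:
        # Distance along the (unit) ray to the closest approach of the center.
        oc = [c - o for c, o in zip(center, origin)]
        t = sum(a * b for a, b in zip(oc, direction))
        if t < 0:
            continue  # object is behind the hand
        closest = [o + t * d for o, d in zip(origin, direction)]
        if math.dist(closest, center) <= radius and t < best_t:
            best, best_t = name, t
    return best

objects = [("cube", (0.0, 0.0, -2.0), 0.2), ("sphere", (0.3, 0.0, -0.1), 0.1)]
print(virtual_hand_select((0.3, 0.0, -0.05), objects))       # sphere
print(virtual_pointer_select((0, 0, 0), (0, 0, -1), objects))  # cube
```

The example shows the trade-off the study probes: the hand can only reach nearby objects, while the pointer selects distant ones but depends on precise aiming along the ray.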
This paper presents Soli, a new, robust, high-resolution, low-power, miniature gesture sensing technology for human-computer interaction based on millimeter-wave radar. We describe a new approach to developing a radar-based sensor optimized for human-computer interaction, building the sensor architecture from the ground up with the inclusion of radar design principles, high temporal resolution gesture tracking, a hardware abstraction layer (HAL), a solid-state radar chip and system architecture, interaction models and gesture vocabularies, and gesture recognition. We demonstrate that Soli can be used for robust gesture recognition and can track gestures with sub-millimeter accuracy, running at over 10,000 frames per second on embedded hardware.
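The layering the abstract names (radar chip, hardware abstraction layer, feature extraction, gesture recognition) can be sketched as follows. This is a toy illustration of that architecture only: the class and method names are assumptions, and the actual Soli interfaces and recognition models are far more sophisticated.

```python
from abc import ABC, abstractmethod

# Toy sketch of a radar-sensing stack layered like the one described:
# chip -> HAL -> per-frame features -> gesture recognition.
# Every name here is an illustrative assumption, not the Soli API.

class RadarHAL(ABC):
    """Hardware abstraction layer: hides the underlying radar chip."""
    @abstractmethod
    def read_frame(self):
        """Return one frame of raw radar samples."""

class FakeRadar(RadarHAL):
    """Stand-in device that emits pre-recorded synthetic frames."""
    def __init__(self, frames):
        self._frames = iter(frames)
    def read_frame(self):
        return next(self._frames)

def energy(frame):
    """Crude per-frame motion feature: mean squared amplitude."""
    return sum(s * s for s in frame) / len(frame)

def recognize(hal, threshold=0.5, n=3):
    """Toy recognizer: n consecutive high-energy frames -> 'swipe'."""
    streak = 0
    for _ in range(10):
        try:
            frame = hal.read_frame()
        except StopIteration:
            break
        streak = streak + 1 if energy(frame) > threshold else 0
        if streak >= n:
            return "swipe"
    return "none"

hal = FakeRadar([[0.1, 0.1], [1.0, 1.0], [1.0, 0.9], [0.9, 1.0]])
print(recognize(hal))  # swipe
```

The HAL boundary is what lets the same recognition pipeline run against different chips, or, as here, against a fake device for testing, while the high frame rate the paper reports is what makes fine temporal features like this usable for sub-millimeter gesture tracking.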