Three-dimensional user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of 3-D interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3-D tasks and the use of traditional 2-D interaction styles in 3-D environments. We divide most user-interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques but also practical guidelines for 3-D interaction design and widely held myths. Finally, we briefly discuss two approaches to 3-D interaction design and some example applications with complex 3-D interaction requirements. We also present an annotated online bibliography as a reference companion to this article.

Introduction

User interfaces (UIs) for computer applications are becoming more diverse. Mice, keyboards, windows, menus, and icons (the standard parts of traditional WIMP interfaces) are still prevalent, but nontraditional devices and interface components are proliferating rapidly. These include spatial input devices such as trackers, 3-D pointing devices, and whole-hand devices that allow gestural input. Three-dimensional, multisensory output technologies, such as stereoscopic projection displays, head-mounted displays (HMDs), spatial audio systems, and haptic devices, are also becoming more common.

With this new technology, new problems have been revealed. People often find it inherently difficult to understand 3-D spaces and to perform actions in free space (Herndon, van Dam, & Gleicher, 1994). Although we live and act in a 3-D world, the physical world contains many more cues for understanding, and constraints and affordances for action, that cannot currently be represented accurately in a computer simulation.
Therefore, great care must go into the design of user interfaces and interaction techniques for 3-D applications. It is clear that simply adapting traditional WIMP interaction styles to three dimensions does not provide a complete solution to this problem. Rather, novel 3-D user interfaces, based on real-world interaction or some other metaphor, must be developed. This paper is a broad overview of the current state of the art in 3-D user interfaces and interaction. It summarizes some of the major components of tutorials and courses given by the authors at various conferences, including the 1999 Symposium on Virtual Reality Software and Technology. Our goals are
[Figure 1: Images generated by our foveated renderer, showing the effect of different configurations of the foveal region: (a) small foveal region (r0 = 5, r1 = 10, p_min = 0.01); (b) medium foveal region (r0 = 10, r1 = 20, p_min = 0.05); (c) full renderer, with every pixel ray traced.]

Abstract

Head-mounted displays with dense pixel arrays used for virtual reality applications require high frame rates and low-latency rendering. This forms a challenging use case for any rendering approach. In addition to its ability to generate realistic images, ray tracing offers a number of distinct advantages, but it has been held back mainly by its performance. In this paper, we present an approach that significantly improves the image generation performance of ray tracing. We combine foveated rendering based on eye tracking with reprojection rendering that reuses previous frames in order to drastically reduce the number of new image samples per frame. To reproject samples, a coarse geometry is reconstructed from a G-buffer. Possible errors introduced by this reprojection, as well as image regions that are critical to perception, are scheduled for resampling. Additionally, a coarse color buffer is used to provide an initial image, which is refined smoothly with more samples where needed. Evaluations and user tests show that our method achieves real-time frame rates, while visual differences compared to fully rendered images are hardly perceptible. As a result, we can ray trace non-trivial static scenes for the Oculus DK2 HMD at 1182 × 1464 pixels per eye within the VSync limits without perceived visual differences.
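The foveal-region parameters in Figure 1 (r0, r1, p_min) suggest a per-pixel sampling probability that is 1 inside the inner radius r0, falls off between r0 and r1, and is clamped to p_min in the periphery. The sketch below illustrates that idea in Python; the smoothstep falloff and the function name are assumptions for illustration, not the paper's exact formula.

```python
def sample_probability(r, r0=5.0, r1=10.0, p_min=0.01):
    """Illustrative foveated-sampling falloff (assumed shape).

    r     -- angular distance of a pixel from the tracked gaze point
    r0    -- inner radius: every pixel inside it is ray traced (p = 1)
    r1    -- outer radius: beyond it the probability is clamped to p_min
    p_min -- minimum sampling probability in the periphery
    """
    if r <= r0:
        return 1.0
    if r >= r1:
        return p_min
    # Smoothstep between the two radii (an assumed falloff curve).
    t = (r - r0) / (r1 - r0)
    s = t * t * (3.0 - 2.0 * t)
    return (1.0 - s) * (1.0 - p_min) + p_min
```

With the Figure 1(a) parameters, a pixel at the gaze point is always traced, a pixel at 20 degrees is traced with probability 0.01, and pixels in between receive a smoothly decreasing share of the per-frame sample budget.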