We propose a conceptual extension of the standard triangle-based graphics pipeline by an additional intersection stage. The corresponding intersection program performs ray-object intersection tests for each fragment of an object's bounding volume. The resulting hit fragments are transferred to the fragment shading stage for computing the illumination and performing further fragment operations. Our approach combines the efficiency of the standard hardware graphics pipeline with the advantages of ray casting, such as pixel-accurate rendering, exact normals, and early ray termination. This concept serves as a framework for the implementation of an interactive ray casting system for trimmed NURBS surfaces. We show how to realize an iterative ray-object intersection method for NURBS primitives as an intersection program. Convex hulls are used as tight bounding volumes for the NURBS patches to minimize the number of fragments to be processed. In addition, we developed a trimming algorithm for the GPU that works with an exact representation of the trimming curves. First experiments with our implementation show that real-time rendering of moderately complex scenes is possible on current graphics hardware.
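To make the iterative per-fragment intersection idea concrete, the following is a minimal CPU-side sketch, assuming a bicubic Bézier patch as a stand-in for a trimmed NURBS patch and a Newton iteration on the patch parameters against a two-plane representation of the ray; the patch data, tolerances, and seed values are illustrative assumptions, not the paper's exact formulation.

    // Sketch: Newton-based ray/patch intersection as it might run once per
    // bounding-volume fragment (assumed setup, not the paper's implementation).
    #include <array>
    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };
    static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3 operator*(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
    static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3 cross(Vec3 a, Vec3 b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }

    // Cubic Bernstein basis and its derivative.
    static void bernstein3(double t, double b[4], double db[4]) {
        double s = 1.0 - t;
        b[0] = s * s * s;   b[1] = 3 * s * s * t;   b[2] = 3 * s * t * t;   b[3] = t * t * t;
        db[0] = -3 * s * s; db[1] = 3 * s * s - 6 * s * t;
        db[2] = 6 * s * t - 3 * t * t;              db[3] = 3 * t * t;
    }

    using Patch = std::array<std::array<Vec3, 4>, 4>;  // 4x4 control points

    // Evaluate the patch and its partial derivatives at (u, v).
    static void evalPatch(const Patch& P, double u, double v, Vec3& S, Vec3& Su, Vec3& Sv) {
        double bu[4], dbu[4], bv[4], dbv[4];
        bernstein3(u, bu, dbu);
        bernstein3(v, bv, dbv);
        S = Su = Sv = {0, 0, 0};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j) {
                S  = S  + (bu[i] * bv[j])  * P[i][j];
                Su = Su + (dbu[i] * bv[j]) * P[i][j];
                Sv = Sv + (bu[i] * dbv[j]) * P[i][j];
            }
    }

    // Newton iteration: the ray (o + t*d) is written as the intersection of two
    // planes; we solve F(u,v) = (n1.S + d1, n2.S + d2) = 0 for the patch parameters.
    static bool intersect(const Patch& P, Vec3 o, Vec3 d, double& u, double& v) {
        Vec3 n1 = std::fabs(d.x) > std::fabs(d.z) ? Vec3{d.y, -d.x, 0} : Vec3{0, -d.z, d.y};
        Vec3 n2 = cross(n1, d);
        double d1 = -dot(n1, o), d2 = -dot(n2, o);
        for (int it = 0; it < 16; ++it) {
            Vec3 S, Su, Sv;
            evalPatch(P, u, v, S, Su, Sv);
            double f = dot(n1, S) + d1, g = dot(n2, S) + d2;
            if (f * f + g * g < 1e-16) return true;            // converged: hit fragment
            double j11 = dot(n1, Su), j12 = dot(n1, Sv);        // Jacobian of F
            double j21 = dot(n2, Su), j22 = dot(n2, Sv);
            double det = j11 * j22 - j12 * j21;
            if (std::fabs(det) < 1e-14) return false;           // singular Jacobian
            u -= ( j22 * f - j12 * g) / det;
            v -= (-j21 * f + j11 * g) / det;
            if (u < 0 || u > 1 || v < 0 || v > 1) return false; // left the patch domain
        }
        return false;
    }

    int main() {
        Patch P;
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                P[i][j] = {i / 3.0, j / 3.0, std::sin(i) * 0.1}; // a gently curved patch
        double u = 0.5, v = 0.5;                                  // seed from the fragment
        bool hit = intersect(P, {0.5, 0.5, 1.0}, {0, 0, -1}, u, v);
        std::printf("hit=%d u=%.3f v=%.3f\n", hit, u, v);
    }

In the actual pipeline, such an iteration would run in the intersection program once per fragment of the convex-hull bounding volume, with the fragment providing the parameter seed, and only converged hits would be forwarded to fragment shading and trimming.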
We introduce a new concept for improved interaction with complex scenes: multi-frame rate rendering and display. Multi-frame rate rendering produces a multi-frame rate display by optically or digitally compositing the results of asynchronously running image generators. Interactive parts of a scene are rendered at the highest possible frame rates, while the rest of the scene is rendered at regular frame rates. Composing image components generated at different update rates can cause visual artifacts, which our rendering techniques partially overcome. The results of a user study confirm that multi-frame rate rendering can significantly improve interaction performance, while the slight visual artifacts are either not noticed or readily tolerated by users. Overall, digital composition shows the most promising results, since it introduces the fewest artifacts, although it requires transferring frame buffer content between different image generators.
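As a rough illustration of the digital-composition variant, here is a minimal sketch, assuming two asynchronously running render loops, a mutex-guarded buffer hand-over, and a per-pixel alpha test; the buffer sizes, update rates, and placeholder "rendering" are assumptions for illustration only, not the system described in the paper.

    // Sketch: asynchronous rendering of a slow scene layer and a fast
    // interactive layer, digitally composited every display frame.
    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <mutex>
    #include <thread>
    #include <vector>

    struct Pixel { unsigned char r, g, b, a; };
    using Frame = std::vector<Pixel>;
    constexpr int W = 320, H = 240;

    std::mutex slowMutex;
    Frame latestSlow(W * H);        // last completed frame of the slowly updated scene part
    std::atomic<bool> running{true};

    // Image generator 1: renders the mostly static scene at a regular (low) rate.
    void slowRenderer() {
        Frame local(W * H);
        while (running) {
            for (auto& p : local) p = {30, 30, 60, 255};                  // placeholder rendering
            { std::lock_guard<std::mutex> lock(slowMutex); latestSlow = local; }  // publish
            std::this_thread::sleep_for(std::chrono::milliseconds(100));  // ~10 Hz
        }
    }

    // Image generator 2 + compositor: renders the interactive object every frame
    // and composites it over the most recent slow frame.
    void fastLoop() {
        Frame interactive(W * H), background(W * H), display(W * H);
        for (int frame = 0; frame < 120; ++frame) {
            for (auto& p : interactive) p = {0, 0, 0, 0};                 // clear, alpha = 0
            interactive[(H / 2) * W + W / 2] = {255, 200, 0, 255};        // placeholder dragged object
            { std::lock_guard<std::mutex> lock(slowMutex); background = latestSlow; }
            for (int i = 0; i < W * H; ++i)                               // digital composition
                display[i] = interactive[i].a ? interactive[i] : background[i];
            std::this_thread::sleep_for(std::chrono::milliseconds(16));   // ~60 Hz present
        }
        running = false;
        std::printf("compositor finished\n");
    }

    int main() {
        std::thread slow(slowRenderer);
        fastLoop();
        slow.join();
    }

The key point is that the interactive layer is regenerated and composited every display frame, while the expensive scene layer is refreshed only whenever its own renderer completes a frame.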
Virtual reality (VR) and immersive virtual reality (iVR) provide flexible and engaging learning opportunities, such as virtual field trips (VFTs). Despite the growing popularity of iVR in education, understanding how it influences learning compared with non-immersive media is still hampered by mixed empirical results and a lack of longitudinal research. This study addresses these issues through an experiment in which undergraduate geoscience students attended two temporally separated VFT sessions through desktop virtual reality (dVR) or iVR, with their learning experience and outcomes measured after each session. Our results show higher levels of enjoyment and satisfaction as well as a stronger sense of spatial presence for iVR students in both VFTs compared to dVR students, but no improvement in learning outcomes for iVR compared to dVR. More importantly, we found a critical interaction between VR condition and repeated participation in VFTs, indicating that longitudinal exposure to VFTs improves knowledge performance more when learning in iVR than through dVR. These results suggest that repeated use of iVR may be beneficial in sustaining students' emotional engagement and compensating for the initial deficiency in their objective learning outcomes compared to other, less immersive technologies.
Access control is an important aspect of shared virtual environments. Resource access may depend not only on prior authorization but also on the context of use, such as distance or position in the scene-graph hierarchy. In virtual worlds that allow user-created content, participants must be able to define and exchange access rights to control the usage of their creations. Using object capabilities, fine-grained access control can be exerted at the object level. We describe our experiences in applying the object-capability model for access control to object-manipulation tasks common in collaborative virtual environments. We also report on a prototype implementation of an object-capability-safe virtual environment that allows anonymous, dynamic exchange of access rights between users, scene elements, and autonomous actors.
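To illustrate the object-capability idea for such object-manipulation tasks, the following is a minimal sketch, assuming a "move" right modeled as a revocable forwarder object; the names (SceneObject, MoveCap, Revoker) and the single-right granularity are illustrative assumptions, not the paper's design.

    // Sketch: a capability is an unforgeable reference that exposes only the
    // permitted operation; the creator can hand it out, and later revoke it.
    #include <cstdio>
    #include <memory>
    #include <string>

    struct SceneObject {                       // full authority: held only by its creator
        std::string name;
        double x = 0, y = 0, z = 0;
        void moveTo(double nx, double ny, double nz) { x = nx; y = ny; z = nz; }
    };

    // Capability granting only the right to move one particular object.
    // It can be copied and passed to other users, scene elements, or actors.
    class MoveCap {
    public:
        explicit MoveCap(std::shared_ptr<SceneObject*> slot) : slot_(std::move(slot)) {}
        bool moveTo(double x, double y, double z) const {
            if (*slot_ == nullptr) return false;   // right has been revoked
            (*slot_)->moveTo(x, y, z);
            return true;
        }
    private:
        std::shared_ptr<SceneObject*> slot_;       // unforgeable link; no other operation exposed
    };

    // Caretaker held by the creator: mints capabilities and can withdraw them later.
    class Revoker {
    public:
        explicit Revoker(SceneObject* target) : slot_(std::make_shared<SceneObject*>(target)) {}
        MoveCap grant() const { return MoveCap(slot_); }
        void revoke() { *slot_ = nullptr; }
    private:
        std::shared_ptr<SceneObject*> slot_;
    };

    int main() {
        SceneObject chair{"chair"};
        Revoker caretaker(&chair);
        MoveCap guest = caretaker.grant();         // handed to another participant

        guest.moveTo(1, 0, 2);                     // allowed while the grant is valid
        caretaker.revoke();                        // creator withdraws the right
        bool ok = guest.moveTo(5, 5, 5);           // now denied

        std::printf("%s at (%.0f, %.0f, %.0f), late move ok=%d\n",
                    chair.name.c_str(), chair.x, chair.y, chair.z, ok);
    }

In this style, granting a right amounts to handing over a capability object, attenuation amounts to wrapping it, and revocation amounts to cutting the forwarder's link, which maps naturally onto the dynamic exchange of rights between users, scene elements, and autonomous actors mentioned above.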