This paper discusses the use of human-scale immersive virtual environments as a platform for developing technologies for embodied digital interaction. We propose a three-body framework of spatial typologies for collaborative room-centered immersive systems that governs the relationship between the immersants, the environment's hardware system, and the presented virtual environment. Through the lens of this framework, we present several use cases in the Collaborative-Research Augmented Immersive Virtual Environment Laboratory (CRAIVE-Lab) at Rensselaer Polytechnic Institute. Through the narratives of these use cases, we argue that the proposed framework not only affords the spatially aware design of interaction systems in room-centered virtual environments, but also engenders new approaches to envisioning system design and its integration into the built environment. The framework aims to emphasize the significance of spatial thinking in expanding the creative potential of body-driven interactive immersive systems.
Immersive rooms, a type of virtual reality system consisting of human-scale panoramic visual and acoustic display systems and distributed sensing apparatus for occupant motion, have been increasingly adopted for dynamic and interactive applications. While these applications enable multi-user audiovisual immersion and navigation from a single physical location, they have yet to be networked across multiple homogeneous system infrastructures. In this work, we co-locate two physically remote immersive rooms – at EMPAC and the CRAIVE-Lab, respectively – in a single system of shared environments developed in Unity and embedded with virtual soundscapes. This system actively monitors the spatial properties of both immersive rooms' dynamic virtual footprints and their corresponding occupants. It generates virtual sound sources both procedurally and through spatially aware user inputs. The sound sources are rendered in real time via an algorithm that synthesizes a ray-traced early reflection window and a parameterized late reverberation estimate from in-scene geometries. The co-located virtual soundscapes, displayed in the individual immersive rooms through their respective multi-channel wave field synthesis loudspeaker systems, are shared such that user interaction in one physical location has holistic effects on the experience of the virtual environments across all associated physical locations. [Work supported by NSF IIS-1909229 & CNS-1229391.]
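To make the rendering pipeline concrete, the following Unity C# sketch illustrates the ray-traced early reflection stage described above, as one might prototype it. All names (EarlyReflectionTracer, the uniform absorption coefficient, the ray budget) are illustrative assumptions rather than the system's actual implementation; the sketch collects delay/energy arrival pairs that a downstream renderer could shape into an early reflection window.

```csharp
using UnityEngine;
using System.Collections.Generic;

// Illustrative sketch of a ray-traced early-reflection pass in Unity.
// Names and parameters are assumptions, not the paper's implementation.
public class EarlyReflectionTracer : MonoBehaviour
{
    public Transform source;        // virtual sound source
    public Transform listener;      // center of the room's virtual footprint
    public int rayCount = 256;      // rays cast per update
    public int maxOrder = 2;        // reflection order kept in the early window
    public float maxDistance = 200f;
    public float absorption = 0.3f; // simplified uniform surface absorption

    // One early-reflection arrival: delay in seconds, energy (linear).
    public struct Arrival { public float delay; public float energy; }

    public List<Arrival> Trace()
    {
        var arrivals = new List<Arrival>();
        for (int i = 0; i < rayCount; i++)
        {
            Vector3 dir = Random.onUnitSphere;
            Vector3 origin = source.position;
            float pathLength = 0f;
            float energy = 1f / rayCount;

            for (int bounce = 0; bounce < maxOrder; bounce++)
            {
                if (!Physics.Raycast(origin, dir, out RaycastHit hit, maxDistance))
                    break;
                pathLength += hit.distance;
                energy *= (1f - absorption);

                // Register an arrival if the listener is visible from the hit point.
                Vector3 toListener = listener.position - hit.point;
                if (!Physics.Raycast(hit.point + hit.normal * 0.01f,
                                     toListener.normalized, toListener.magnitude))
                {
                    float total = pathLength + toListener.magnitude;
                    arrivals.Add(new Arrival {
                        delay = total / 343f,              // speed of sound, m/s
                        energy = energy / (total * total)  // simplified spherical spreading
                    });
                }
                origin = hit.point + hit.normal * 0.01f;   // offset to avoid self-hit
                dir = Vector3.Reflect(dir, hit.normal);
            }
        }
        return arrivals;
    }
}
```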
State-of-the-art schemata for immersive audiovisual system design mostly rely on in-situ stand-up construction with footings and rigid structural supports, an approach limited by low mobility and long set-up times. In this work, a new concept of audiovisual system design for a collaborative Immersive Virtual Environment, with flexible and deployable projection elements and modular assemblies, is proposed. Drawing on the stand-up configuration of Rensselaer's Collaborative-Research Augmented Immersive Virtual Environment Laboratory (CRAIVE-Lab), a foundationless rectangular panoramic display with rounded corners is used, incorporating a motorized roll-up framework with mountable fillets. This set-up is then accompanied by a unitized 60-channel Wave Field Synthesis (WFS) linear loudspeaker array. The proposed audiovisual system calibrates its spatial audiovisual rendering through the integrated use of game-engine-based 3-D virtual environments (made in Unity and Unreal) and Max/MSP-based sonification utilities. In particular, an equirectangular transform is applied to the virtual cameras and render textures to remove distortion effects arising from the screen geometry. This transform is shared with the WFS array for a congruent presentation of audiovisual content.
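As an illustration of the equirectangular transform mentioned above, the following C# sketch maps a normalized panoramic screen coordinate to a world-space view direction; sampling a cubemap render texture along these directions yields an image free of the distortion a planar camera projection would introduce on the display. This is a minimal sketch assuming a full 360-degree horizontal panorama and a hypothetical vertical field of view, not the system's actual shader.

```csharp
using UnityEngine;

// Illustrative equirectangular mapping: normalized screen coordinate
// (u, v) -> world-space view direction for cubemap sampling.
public static class EquirectangularMap
{
    // u in [0,1] spans the full 360-degree horizontal panorama;
    // v in [0,1] spans an assumed vertical field of view (default 90 degrees).
    public static Vector3 DirectionFromUV(float u, float v, float verticalFovDeg = 90f)
    {
        float yaw = (u - 0.5f) * 2f * Mathf.PI;                 // -pi .. pi
        float halfFov = verticalFovDeg * 0.5f * Mathf.Deg2Rad;
        float pitch = (v - 0.5f) * 2f * halfFov;                // -halfFov .. halfFov

        // Spherical direction: yaw about the vertical axis, pitch above the horizon.
        return new Vector3(
            Mathf.Cos(pitch) * Mathf.Sin(yaw),
            Mathf.Sin(pitch),
            Mathf.Cos(pitch) * Mathf.Cos(yaw));
    }
}
```

Because the horizontal coordinate maps linearly to yaw, the same angle can also index positions along the WFS loudspeaker array, which is one plausible reading of how the transform is "shared" for congruent audiovisual presentation.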
This work, situated at Rensselaer’s Collaborative-Research Augmented Immersive Virtual Environment Laboratory (CRAIVE-Lab), demonstrates a system that utilizes the facility’s panoramic display and multichannel wave field synthesis loudspeaker array to simulate navigable human-scale urban environments with automatically generated virtual soundscapes. The system positions the CRAIVE-Lab’s virtual footprint within the Unity game engine and provides it with the capability to move within virtual space. Given a geo-location input, the system uses ArcGIS to extract geospatial features such as urban topologies and building extrusions. The same input is also used to retrieve real-time weather data from open-source databases (e.g., OpenWeather). Based upon the extracted information, the system updates the acoustic signatures of the virtual surroundings by performing a multi-channel ray-tracing analysis at fixed time intervals. The resulting signatures are then used to generate environmental noise profiles and to process auto-generated virtual sound sources present in the environment using wave field synthesis and an extension of multiple audio datasets typically used for model training in urban sound classification (e.g., UrbanSound8K). We present the results as part of an in situ audiovisual experience where users can stand in the CRAIVE-Lab’s physical enclosure and walk about the virtual landscape in which they are immersed.
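The weather retrieval step can be sketched directly against OpenWeather's public "current weather" REST endpoint. The following Unity C# coroutine is a minimal sketch; the class name, the example coordinates, and the way the JSON response would feed the soundscape generator are assumptions, not the system's documented interface.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Illustrative fetch of current weather for a geo-location input.
public class WeatherFetcher : MonoBehaviour
{
    public float latitude = 42.7298f;   // example coordinates: Troy, NY
    public float longitude = -73.6789f;
    public string apiKey = "YOUR_API_KEY";

    IEnumerator Start()
    {
        string url = "https://api.openweathermap.org/data/2.5/weather" +
                     $"?lat={latitude}&lon={longitude}&appid={apiKey}";
        using (UnityWebRequest request = UnityWebRequest.Get(url))
        {
            yield return request.SendWebRequest();
            if (request.result == UnityWebRequest.Result.Success)
            {
                // JSON with fields such as "weather", "wind", and "rain";
                // these could drive wind/rain layers of the generated soundscape.
                Debug.Log(request.downloadHandler.text);
            }
            else
            {
                Debug.LogWarning($"Weather request failed: {request.error}");
            }
        }
    }
}
```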
Existing auralization frameworks for interactive virtual environments have found applications in simulating acoustic conditions for binaural listening and real-time audiovisual navigation. This work, situated at the Collaborative-Research Augmented Immersive Virtual Environment Laboratory (CRAIVE-Lab), extends elements of these frameworks for human-scale interactive audiovisual display with moving virtual sound sources and a non-static virtual representation of the facility’s footprint. The work involves the development of an adaptive acoustic ray-tracing prototype capable of generating impulse responses for individual virtual loudspeaker representations based upon changes in room orientation in virtual space at runtime. Through the integrated use of game engines (i.e., Unity and Unreal Engine), the prototype is presented in the context of dynamic audiovisual display, and it actively analyzes the virtual scene geometries using a multi-detailed rendering approach. With both reconstructed high-resolution 3D models of existing spaces and automatically generated virtual landscapes from geo-spatial data, the developed system is evaluated in terms of computational efficiency and in terms of conventional room acoustics parameters, using model-based acoustic energy decay analysis across the listening region. [Work supported by the Cognitive Immersive Systems Laboratory (CISL) and NSF IIS-1909229.]
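The hand-off from ray-traced early arrivals to a parameterized late reverberation estimate can be illustrated with a short C# sketch. It assembles a per-loudspeaker impulse response by placing discrete taps at each arrival time and appending an exponentially decaying noise tail governed by a T60 parameter; this is a standard room acoustics model assumed here for illustration, not the prototype's exact formulation.

```csharp
using System;

// Illustrative assembly of an impulse response from early arrivals
// plus a parameterized exponential late-reverberation tail.
public static class ImpulseResponseBuilder
{
    public static float[] Build(
        (float delay, float energy)[] earlyArrivals, // from the ray tracer
        float t60,        // late reverberation time in seconds
        float mixingTime, // where the early window hands off to the tail
        int sampleRate = 48000)
    {
        int length = (int)((mixingTime + t60) * sampleRate);
        var ir = new float[length];
        var rng = new Random(0);

        // Early reflections: discrete taps at their arrival times.
        foreach (var (delay, energy) in earlyArrivals)
        {
            int n = (int)(delay * sampleRate);
            if (n < length) ir[n] += (float)Math.Sqrt(energy); // energy -> amplitude
        }

        // Late tail: exponentially decaying noise, reaching -60 dB over t60.
        double decayPerSample = Math.Pow(10.0, -3.0 / (t60 * sampleRate));
        double envelope = 0.1; // illustrative initial tail level
        for (int n = (int)(mixingTime * sampleRate); n < length; n++)
        {
            ir[n] += (float)(envelope * (rng.NextDouble() * 2.0 - 1.0));
            envelope *= decayPerSample;
        }
        return ir;
    }
}
```

Regenerating such responses as the room's virtual orientation changes at runtime is what the adaptive prototype described above would need to do per virtual loudspeaker.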