In order to make full use of their potential to replace experiments in real rooms, auralizations must be as realistic as possible. Recently, it has been shown that for speech, head-tracked binaural auralizations based on measured binaural room impulse responses (BRIRs) can be so realistic that they become indistinguishable (or nearly so) from the real room [1, 2]. In the present contribution, perceptual comparisons between the auralized and the real room are reported for auralizations based on both measured and simulated BRIRs. In the experiment, subjects sitting in the real room rated the agreement between the real and the auralized room with respect to a number of attributes. The results indicate that for most attributes, the agreement between the auralized and the real room can be very convincing (better than 7.5 on a nine-point scale). This was observed not only for auralizations based on measured BRIRs, but also for those based on simulated BRIRs. In the scenario considered here, the use of individual head-related impulse responses (HRIRs) does not seem to offer any benefit over using HRIRs from a head-and-torso simulator.
Sound radiation of most natural sources, such as human speakers or musical instruments, typically exhibits a spatial directivity pattern. This directivity contributes to the perception of sound sources in rooms, affecting the spatial energy distribution of early reflections and late diffuse reverberation. Thus, for convincing sound field reproduction and acoustics simulation, source directivity has to be considered. Whereas perceptual effects of directivity, such as source-orientation-dependent coloration, appear relevant for the direct sound and individual early reflections, it is unclear how spectral and spatial cues interact for later reflections. Better knowledge of the perceptual relevance of source orientation cues might help to simplify the acoustics simulation. Here, it is assessed to what extent the directivity of a human speaker should be simulated for early reflections and diffuse reverberation. The computationally efficient hybrid approach to simulate and auralize binaural room impulse responses [Wendt et al., J. Audio Eng. Soc. 62, 11 (2014)] was extended to simulate source directivity. Two psychoacoustic experiments assessed the listeners' ability to distinguish between different virtual source orientations when the frequency-dependent spatial directivity pattern of the source was approximated by a direction-independent average filter for different higher reflection orders. The results indicate that it is sufficient to simulate effects of source directivity in the first-order reflections.
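The approximation described above can be illustrated with a minimal sketch. The abstract does not specify the averaging method, so the energy (power) average over directions assumed here, as well as the function names and the array layout (magnitudes of shape `[n_directions, n_freqs]`), are illustrative choices, not the authors' implementation:

```python
import numpy as np

def average_directivity_filter(directivity, weights=None):
    """Collapse a frequency-dependent spatial directivity pattern
    (magnitudes, shape [n_directions, n_freqs]) into one
    direction-independent filter by energy averaging over directions."""
    d = np.asarray(directivity, dtype=float)
    if weights is None:
        # Uniform direction weights; a real sampling grid would use
        # solid-angle weights instead.
        weights = np.full(d.shape[0], 1.0 / d.shape[0])
    # Power average across directions, per frequency bin
    return np.sqrt(np.tensordot(weights, d**2, axes=1))

def reflection_filter(directivity, direction_idx, order,
                      max_directional_order=1):
    """Per-frequency source filter for one reflection path:
    direction-dependent up to `max_directional_order` (the regime the
    experiments found perceptually relevant), averaged beyond it."""
    if order <= max_directional_order:
        return np.asarray(directivity, dtype=float)[direction_idx]
    return average_directivity_filter(directivity)
```

With `max_directional_order=1`, the direct sound and first-order reflections keep the full orientation-dependent pattern, while all higher orders and the diffuse tail share one average filter, which is the simplification the experiments support.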
Active echolocation by sighted humans using predefined synthetic and self-emitted sounds, as habitually used by blind individuals, was investigated. Using virtual acoustics, distance estimation and directional localization of a wall in different rooms were assessed. A virtual source was attached to either the head or the hand, with realistic or increased source directivity. A control condition was tested with a virtual sound source located at the wall. Untrained echolocation performance comparable to performance in the control condition was achieved on an individual level. On average, the echolocation performance was considerably lower than in the control condition; however, it benefited from increased directivity.
Awareness of space, and subsequent orientation and navigation in rooms, is dominated by the visual system. However, humans are able to extract auditory information about their surroundings from early reflections and reverberation in enclosed spaces. To better understand orientation and navigation based on acoustic cues only, three virtual corridor layouts (I-, U-, and Z-shaped) were presented using real-time virtual acoustics in a three-dimensional 86-channel loudspeaker array. Participants were seated on a rotating chair in the center of the loudspeaker array and navigated using real rotation and virtual locomotion by “teleporting” in steps on a grid in the invisible environment. A head-mounted display showed control elements and, in a visual reference condition, the environment. Acoustical information about the environment originated from a virtual sound source at the collision point of a virtual ray with the boundaries. In different control modes, the ray was cast either in the view or hand direction or in a rotating, “radar”-like fashion in 90° steps to all sides. Time to completion, number of collisions, and movement patterns were evaluated. Navigation and orientation were possible based on the direct sound, with little effect of room acoustics and control mode. Underlying acoustic cues were analyzed using an auditory model.
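The core geometric step described above, placing the virtual sound source at the point where a ray from the listener first hits a room boundary, can be sketched in 2D. The function name and the segment-based wall representation are assumptions for illustration; the study's actual implementation is not specified in the abstract:

```python
import math

def cast_ray_to_walls(pos, angle_deg, walls):
    """Cast a 2D ray from listener position `pos` in direction `angle_deg`
    and return the nearest intersection with the boundary segments in
    `walls` (list of ((x1, y1), (x2, y2)) pairs). That point is where the
    virtual sound source would be placed."""
    dx = math.cos(math.radians(angle_deg))
    dy = math.sin(math.radians(angle_deg))
    best_t, hit = math.inf, None
    for (x1, y1), (x2, y2) in walls:
        ex, ey = x2 - x1, y2 - y1          # wall segment direction
        denom = dx * ey - dy * ex
        if abs(denom) < 1e-12:             # ray parallel to this wall
            continue
        # Solve pos + t*(dx, dy) = (x1, y1) + s*(ex, ey) for t and s
        wx, wy = x1 - pos[0], y1 - pos[1]
        t = (wx * ey - wy * ex) / denom    # distance along the ray
        s = (wx * dy - wy * dx) / denom    # position along the segment
        if t > 1e-9 and 0.0 <= s <= 1.0 and t < best_t:
            best_t = t
            hit = (pos[0] + t * dx, pos[1] + t * dy)
    return hit
```

In the “radar”-like control mode, this cast would simply be repeated at the current heading plus 90°, 180°, and 270° to probe all four sides.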