Estimating the basic acoustic parameters of conversational speech in noisy real-world conditions has been an elusive task in hearing research. Nevertheless, these data are essential ingredients for speech intelligibility tests and fitting rules for hearing aids. Previous surveys did not provide clear methodology for their acoustic measurements and setups, were opaque about their samples, or did not control for the distance between talker and listener, even though people are known to adapt their distance in noisy conversations. In the present study, conversations were elicited between pairs of people by asking them to play a collaborative game that required them to communicate. While performing this task, the subjects listened to binaural recordings of different everyday scenes, which were presented to them at their original sound pressure level (SPL) via highly open headphones. Their voices were recorded separately using calibrated headset microphones. The subjects were seated inside an anechoic chamber at distances of 1 and 0.5 m. Precise estimates of realistic speech levels and signal-to-noise ratios (SNRs) were obtained for the different acoustic scenes, at broadband and third-octave band levels. It is shown that with acoustic background noise above approximately 69 dB SPL at a 1 m distance, or 75 dB SPL at 0.5 m, the average SNR becomes negative. It is shown through interpolation between the two conditions that, had the conversation partners been allowed to optimize their positions by moving closer to each other, negative SNRs should be observed only above 75 dB SPL. The implications of the results for speech tests and hearing aid fitting rules are discussed.
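As an illustration of the measurement logic described above, the following is a minimal Python sketch of the broadband level and SNR computation, assuming calibrated signals in pascals for the separately recorded speech and the reproduced background scene; the signal names and example values are hypothetical and not taken from the study.

import numpy as np

P_REF = 20e-6  # reference pressure (20 micropascals) for dB SPL

def spl(x):
    # broadband sound pressure level of a calibrated signal
    rms = np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms / P_REF)

def snr_db(speech, noise):
    # SNR as the difference between speech and noise levels
    return spl(speech) - spl(noise)

# illustrative use with synthetic stand-in signals
rng = np.random.default_rng(0)
speech = 0.10 * rng.standard_normal(48000)  # stand-in for a headset recording
noise = 0.20 * rng.standard_normal(48000)   # stand-in for the scene playback
print(f"SNR: {snr_db(speech, noise):.1f} dB")  # negative when noise dominates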
Everyday listening environments are characterized by far more complex spatial, spectral, and temporal sound field distributions than the acoustic stimuli that are typically employed in controlled laboratory settings. As such, the reproduction of acoustic listening environments has become important for several research avenues related to sound perception, such as hearing loss rehabilitation, soundscapes, speech communication, auditory scene analysis, automatic scene classification, and room acoustics. However, the recordings of acoustic environments that are used as test material in these research areas are usually designed specifically for one study, or are provided in custom databases that cannot be adapted beyond their original application. In this work, we present the Ambisonic Recordings of Typical Environments (ARTE) database, which addresses several research needs simultaneously: realistic audio recordings that can be reproduced in 3D, 2D, or binaurally, with known acoustic properties, including absolute level and room impulse response. Multichannel higher-order ambisonic recordings of 13 realistic typical environments (e.g., office, café, dinner party, train station) were processed, acoustically analyzed, and subjectively evaluated to determine their perceived identity. The recordings are delivered in a generic format that may be reproduced with different hardware setups, and may also be used in binaural or single-channel setups. Room impulse responses and detailed acoustic analyses of all environments supplement the recordings. The database is made open to the research community with the explicit intention to expand it in the future to include more scenes.
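To illustrate how a generic ambisonic format can be rendered on different setups, the following Python sketch decodes 2D first-order B-format (W, X, Y) to a horizontal loudspeaker ring using a basic projection decoder. The ARTE recordings are higher order, so this first-order example, its simplified encoding, and its crude normalization are assumptions for brevity, not the database's reference decoder.

import numpy as np

def decode_foa_2d(wxy, azimuths_deg):
    # basic (projection) decode of first-order B-format to a speaker ring;
    # normalization conventions such as SN3D/FuMa are ignored here
    az = np.deg2rad(np.asarray(azimuths_deg, dtype=float))
    w, x, y = wxy  # each channel has shape (n_samples,)
    feeds = w + x * np.cos(az)[:, None] + y * np.sin(az)[:, None]
    return feeds / len(az)  # shape (n_speakers, n_samples)

# illustrative use: a source encoded at 45 degrees, decoded to 8 speakers
fs = 48000
theta = np.deg2rad(45.0)
sig = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
wxy = np.stack([sig, sig * np.cos(theta), sig * np.sin(theta)])
feeds = decode_foa_2d(wxy, azimuths_deg=np.arange(0, 360, 45))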
The concept of complex acoustic environments has appeared, in different variations, in several unrelated research areas within acoustics. Based on a review of the usage and evolution of this concept in the literature, a relevant framework was developed, which includes nine broad characteristics that are thought to drive the complexity of acoustic scenes. The framework was then used to study the most relevant characteristics for stimuli of realistic, everyday acoustic scenes: multiple sources, source diversity, reverberation, and the listener's task. The effect of these characteristics on perceived scene complexity was then evaluated in an exploratory study that reproduced the same stimuli with a three-dimensional loudspeaker array inside an anechoic chamber. Sixty-five subjects listened to the scenes and rated each one on 29 attributes, including complexity, both with and without target speech in the scenes. The data were analyzed using three-way principal component analysis with a (2 × 3 × 2) Tucker3 model in the dimensions of scales (or ratings), scenes, and subjects, explaining 42% of the variation in the data. "Comfort" and "variability" were the dominant scale components, which span the perceived complexity. Interaction effects were observed; for example, the additional task of attending to target speech shifted the complexity rating closer to the comfort scale. Also, speech contained in the background scenes introduced a second subject component, which suggests that some subjects are more distracted than others by background speech when listening to target speech. The results are interpreted in light of the proposed framework.
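For readers unfamiliar with three-way principal component analysis, the following Python sketch fits a Tucker3 model with a (2 × 3 × 2) core to a hypothetical ratings tensor using the tensorly package; the tensor shape (a placeholder of 12 scenes) and the random data stand in for the actual ratings, which are not reproduced here.

import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

# hypothetical ratings tensor: scales x scenes x subjects
rng = np.random.default_rng(0)
ratings = tl.tensor(rng.standard_normal((29, 12, 65)))

# Tucker3 decomposition with a (2 x 3 x 2) core, mirroring the
# component counts reported for scales, scenes, and subjects
core, (scales, scenes, subjects) = tucker(ratings, rank=[2, 3, 2])

# variation explained by the low-rank reconstruction
recon = tl.tucker_to_tensor((core, [scales, scenes, subjects]))
explained = 1 - tl.norm(ratings - recon) ** 2 / tl.norm(ratings) ** 2
print(f"explained variation: {explained:.1%}")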
Everyday environments impose acoustical conditions on speech communication that require interlocutors to adapt their behavior in order to hear and to be heard. Past research has focused mainly on the adaptation of speech level, while few studies have investigated how interlocutors adapt their conversational distance as a function of noise level. Similarly, no study has tested the interaction between distance and speech level adaptation in noise. In the present study, participant pairs held natural conversations while binaurally listening to identical noise recordings of different realistic environments (ranging from 53 to 92 dB sound pressure level), using acoustically transparent headphones. Conversations were held in standing or sitting (at a table) conditions. Interlocutor distances were tracked using wireless motion-capture equipment, which allowed subjects to move closer to or farther from each other. The results show that talkers adapt their voices mainly according to the noise conditions and much less according to distance. Distance adaptation was highest in the standing condition. Consequently, mainly in the loudest environments, listeners in the standing condition were able to improve the signal-to-noise ratio (SNR) at the receiver location compared to the sitting condition, such that it became less negative. Analytical approximations are provided for the conversational distance, as well as for the receiver-related speech level and SNR.
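The paper's fitted approximations are not reproduced here, but the following Python sketch shows the generic form such a model can take: a Lombard-style increase of voice level with noise level, free-field distance decay toward the receiver, and the resulting SNR. All parameter values (base level, Lombard slope, reference points) are invented for illustration.

import numpy as np

def speech_level_at_receiver(noise_spl, distance_m,
                             base_level=58.0, lombard_slope=0.3,
                             ref_distance=1.0):
    # voice level rises with noise level (Lombard effect) and
    # decays with distance under a free-field 1/r law
    at_source = base_level + lombard_slope * (noise_spl - 50.0)
    return at_source - 20.0 * np.log10(distance_m / ref_distance)

def snr_at_receiver(noise_spl, distance_m):
    return speech_level_at_receiver(noise_spl, distance_m) - noise_spl

# illustrative SNRs across the quietest and loudest environments
for noise in (53, 70, 92):
    print(noise, round(snr_at_receiver(noise, distance_m=0.8), 1))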