2016
DOI: 10.1007/978-3-319-25474-6_41

Neural Segregation of Concurrent Speech: Effects of Background Noise and Reverberation on Auditory Scene Analysis in the Ventral Cochlear Nucleus

Abstract: Concurrent complex sounds (e.g., two voices speaking at once) are perceptually disentangled into separate "auditory objects". This neural processing often occurs in the presence of acoustic-signal distortions from noise and reverberation (e.g., in a busy restaurant). A difference in periodicity between sounds is a strong segregation cue under quiet, anechoic conditions. However, noise and reverberation exert differential effects on speech intelligibility under "cocktail-party" listening conditions. Previous ne…
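The abstract's central cue, a difference in periodicity (fundamental frequency) between concurrent sounds, can be illustrated with a short sketch. The example below is not from the paper: it assumes arbitrary F0 values (100 Hz and 126 Hz) and a 16 kHz sample rate, and uses a plain autocorrelation of a two-voice-like mixture to show that each source's pitch period leaves its own peak, the kind of periodicity cue a segregation mechanism could exploit.

```python
import numpy as np

# Illustrative sketch only (not code from the paper). Two "voices" are modelled
# as harmonic complexes with different fundamental frequencies (F0s); the
# autocorrelation of their mixture shows a peak at each F0's pitch period --
# the periodicity difference that serves as a segregation cue.

fs = 16000                          # sample rate in Hz (assumed)
t = np.arange(int(0.5 * fs)) / fs   # 0.5 s of signal

def harmonic_complex(f0, n_harmonics=10):
    """Equal-amplitude harmonics of f0: a crude stand-in for a voiced vowel."""
    return sum(np.sin(2 * np.pi * k * f0 * t) for k in range(1, n_harmonics + 1))

f0_a, f0_b = 100.0, 126.0           # arbitrary F0s, roughly 4 semitones apart
mixture = harmonic_complex(f0_a) + harmonic_complex(f0_b)

# One-sided, normalised autocorrelation of the mixture.
ac = np.correlate(mixture, mixture, mode="full")[len(mixture) - 1:]
ac /= ac[0]

# Each constituent's pitch period (1/F0) should show up as a local peak.
for f0 in (f0_a, f0_b):
    lag = int(round(fs / f0))
    peak = ac[lag - 2: lag + 3].max()
    print(f"F0 = {f0:5.1f} Hz (period {1000 / f0:4.1f} ms): "
          f"autocorrelation peak = {peak:.2f}")
```

In quiet, anechoic conditions these peaks are distinct; added noise and reverberation smear and attenuate them, which is consistent with the degraded periodicity cues the abstract describes.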

Cited by 4 publications (4 citation statements). References 16 publications.
“…Lexical processing of speech depends too on the precise encoding of vocal spectral modulations [48], [49], [50], [51], [52]. Relative temporal displacements between spectral components of speech may affect intelligibility performance [53]; this may naturally arise from surface reverberation effects that lead to weaker segregation in multispeaker conditions [54], [55]. In another example of dependence on spectral encoding, bandwidth tuning loss at higher frequencies under presbycusis severely limits the ability to target speech in multispeaker conditions.…”
Section: Discussion
confidence: 99%
“…Additionally, response fidelity in the IC is greater in reverberant conditions compared to anechoic stimuli that are matched in modulation depth. These findings suggest that subcortical processing that occurs between the cochlear nucleus and auditory cortex may compensate for the degradation of periodicity pitch processing induced by challenging listening conditions, in part by filtering out noise (Sayles et al, 2016). Furthermore, these potential mechanisms may contribute to speech representations in the auditory cortex that are relatively invariant with respect to competing noise and reverberation (Mesgarani et al, 2014), in which normal-hearing individuals have considerable listening abilities in moderately challenging conditions (Poissant et al, 2006).…”
Section: Salient Features of Complex Sounds Are Extracted Subcortically
confidence: 95%
“…However, when the listening environment is contaminated by extraneous noise and reverberation, AVCN pitch processing is degraded (Sayles and Winter, 2008b). These findings suggest that early brainstem networks are not fully capable of faithfully representing ongoing speech sounds in the acoustically challenging conditions that we often experience (Sayles et al, 2016).…”
Section: Salient Features of Complex Sounds Are Extracted Subcortically
confidence: 96%
“…Voice-onset time appears to be represented in a similar fashion to the auditory nerve, that is, through a pause in neuronal spiking corresponding to the VOT (Young, 2008). The inferior colliculus plays a crucial role in the process of filtering and sharpening the signal, as well as compensating for the effects of reverberation on the amplitude envelope of the speech signal (Slama & Delgutte, 2015; Suga, 1995), for example when the system perceives vowels such as /a/ and /i/ (Sayles, Stasiak, & Winter, 2016). This early filtering and compensation system appears to help the primary auditory cortex further up in the hierarchy fulfil important functions, such as processing speech sounds as robust and invariant categories in conditions marked by noise or reverberation, which may occur in a loud restaurant or cocktail party where we may hear many people speaking at once (Cherry, 1953).…”
Section: Subcortical Network and the Extraction of Acoustic Features…
confidence: 99%