The current study measured neural responses to investigate auditory stream segregation of noise stimuli with or without clear spectral contrast. Sequences of alternating A and B noise bursts were presented to elicit stream segregation in normal-hearing listeners. The successive B bursts in each sequence were separated by equal temporal intervals, with a manipulation introduced on the last stimulus: the last B burst was delayed on 50% of the sequences and not delayed on the other 50%. The A bursts were temporally jittered between adjacent B bursts. To study the effects of spectral separation on streaming, the A and B bursts were further manipulated to be either bandpass-filtered noises widely spaced in center frequency or broadband noises. Event-related potentials (ERPs) to the last B bursts were analyzed to compare neural responses on delay vs. no-delay trials in both passive and attentive listening conditions. In the passive listening condition, a trend toward a possible late mismatch negativity (MMN) or late discriminative negativity (LDN) response was observed only when the A and B bursts were spectrally separate, suggesting that spectral separation in the A and B burst sequences may be conducive to stream segregation at the pre-attentive level. In the attentive condition, a P300 response was consistently elicited regardless of whether there was spectral separation between the A and B bursts, indicating the facilitative role of voluntary attention in stream segregation. The results suggest that reliable ERP measures can serve as indirect indicators of auditory stream segregation under conditions of weak spectral contrast. These findings have important implications for cochlear implant (CI) studies: because spectral information available through a CI device or simulation is substantially degraded, CI listeners may require more attention to achieve stream segregation.
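The stimulus paradigm described above can be sketched in code. The following is a minimal, illustrative sketch only: the burst durations, inter-onset intervals, delay, jitter range, and filter bands are assumed placeholder values, not the study's actual parameters.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100  # sampling rate (Hz); all timing/filter values below are illustrative

def noise_burst(dur_s, band=None, rng=None):
    """White-noise burst; optionally bandpass-filtered to band = (lo, hi) Hz."""
    rng = rng or np.random.default_rng()
    x = rng.standard_normal(int(dur_s * FS))
    if band is not None:
        sos = butter(4, band, btype="bandpass", fs=FS, output="sos")
        x = sosfilt(sos, x)
    return x / np.max(np.abs(x))  # peak-normalize

def ab_sequence(n_b=5, burst_dur=0.05, b_ioi=0.5, delay_last=False,
                delay_s=0.1, a_band=(500, 1000), b_band=(2000, 4000),
                jitter_s=0.05, rng=None):
    """B bursts at a fixed inter-onset interval, with the last B burst
    optionally delayed; one temporally jittered A burst between each
    pair of adjacent B bursts. Pass band=None for broadband bursts."""
    rng = rng or np.random.default_rng()
    b_onsets = [i * b_ioi for i in range(n_b)]
    if delay_last:
        b_onsets[-1] += delay_s
    # A bursts: midway between adjacent B bursts, plus uniform jitter
    a_onsets = [(b_onsets[i] + b_onsets[i + 1]) / 2
                + rng.uniform(-jitter_s, jitter_s)
                for i in range(n_b - 1)]
    seq = np.zeros(int((b_onsets[-1] + burst_dur + 0.1) * FS))
    for t in b_onsets:
        burst = noise_burst(burst_dur, b_band, rng)
        i = int(t * FS)
        seq[i:i + len(burst)] += burst
    for t in a_onsets:
        burst = noise_burst(burst_dur, a_band, rng)
        i = int(t * FS)
        seq[i:i + len(burst)] += burst
    return seq, b_onsets
```

Averaging ERP epochs time-locked to the final B-burst onset returned by `ab_sequence` would then allow the delay vs. no-delay comparison described in the abstract.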
The purpose of this study was to investigate the roles of spectral overlap and amplitude modulation (AM) rate in stream segregation for noise signals, as well as to test the build-up effect based on these two cues. Segregation ability was evaluated using an objective paradigm with listeners' attention focused on stream segregation. Stimulus sequences consisted of two interleaved sets of bandpass noise bursts (A and B bursts). The A and B bursts differed in spectrum, AM rate, or both, and the amount of the difference between the two sets of noise bursts was varied. Long and short sequences were studied to investigate the build-up effect for segregation based on spectral and AM-rate differences. Results showed the following: (1) stream segregation ability increased with greater spectral separation; (2) larger AM-rate separations were associated with stronger segregation abilities; (3) spectral separation elicited the build-up effect across the range of spectral differences assessed in the current study; and (4) AM-rate separation interacted with spectral separation, suggesting an additive effect of the two cues on segregation build-up. The findings suggest that, when normal-hearing listeners direct their attention toward segregation, they are able to segregate auditory streams based on reduced spectral contrast cues that vary by the amount of spectral overlap. Further, regardless of the spectral separation, they are able to use AM-rate differences as a secondary, weaker cue. Based on spectral differences, listeners can segregate auditory streams better as the listening duration is prolonged; that is, sparse spectral cues elicit build-up of segregation. AM-rate differences, however, only appear to elicit build-up in combination with spectral difference cues.
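The two stimulus dimensions manipulated above (spectral separation and AM-rate separation) can be sketched as follows. This is a minimal illustration, not the study's synthesis code: center frequencies, bandwidth, AM rates, and timing are assumed placeholder values.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100  # sampling rate (Hz); all parameter values below are illustrative

def am_noise_burst(dur_s, center_hz, bw_oct=1.0, am_rate_hz=None,
                   am_depth=1.0, rng=None):
    """Bandpass noise burst centered at center_hz with a bandwidth of
    bw_oct octaves, optionally sinusoidally amplitude-modulated."""
    rng = rng or np.random.default_rng()
    n = int(dur_s * FS)
    lo = center_hz * 2 ** (-bw_oct / 2)
    hi = center_hz * 2 ** (bw_oct / 2)
    sos = butter(4, (lo, hi), btype="bandpass", fs=FS, output="sos")
    x = sosfilt(sos, rng.standard_normal(n))
    if am_rate_hz is not None:
        t = np.arange(n) / FS
        x *= 1 + am_depth * np.sin(2 * np.pi * am_rate_hz * t)
    return x / np.max(np.abs(x))  # peak-normalize

def abab_sequence(n_pairs, burst_dur, gap, a_center, b_center,
                  a_am=None, b_am=None, rng=None):
    """Interleave A and B bursts (ABAB...). Spectral separation is set by
    the A/B center-frequency ratio; AM-rate separation by a_am vs. b_am.
    Longer sequences (larger n_pairs) probe the build-up effect."""
    rng = rng or np.random.default_rng()
    step = int((burst_dur + gap) * FS)
    nb = int(burst_dur * FS)
    seq = np.zeros(step * 2 * n_pairs)
    for k in range(2 * n_pairs):
        center, am = (a_center, a_am) if k % 2 == 0 else (b_center, b_am)
        seq[k * step:k * step + nb] = am_noise_burst(
            burst_dur, center, am_rate_hz=am, rng=rng)
    return seq
```

Varying the `b_center`/`a_center` ratio adjusts spectral overlap, and varying `b_am` relative to `a_am` adjusts AM-rate separation, so both cue strengths can be manipulated independently in one sequence generator.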
Directional microphones significantly improved speech-in-noise recognition over omnidirectional microphones and reduced self-perceived listening effort. However, the dual-task measure used in this study showed no differences in listening effort across the experimental conditions and may not be sensitive enough to detect such changes.
Purpose: To examine the effects of temporal and spectral interference of masking noise on sentence recognition for listeners with cochlear implants (CI) and for normal-hearing listeners presented with vocoded signals that simulate signals processed through a CI (NH-Sim). Method: NH-Sim and CI listeners participated in the experiments using speech and noise that were processed by bandpass filters. Depending on the experimental condition, the spectra of the maskers relative to that of the speech were set to be completely embedded within, partially overlapping with, or completely separate from the speech. The maskers were either steady or amplitude modulated and were presented at +10 dB signal-to-noise ratio. Results: NH-Sim listeners experienced progressively more masking as the masker became more spectrally overlapping with the speech, whereas CI listeners experienced masking even when the masker was spectrally remote from the speech signal. Both the NH-Sim and CI listeners experienced significant modulation interference when noise was modulated at a syllabic rate (4 Hz), suggesting that listeners may experience both modulation interference and masking release. Thus, modulated noise has mixed and counteracting effects on speech perception. Conclusion: When NH-Sim and CI listeners with poor spectral resolution were tested using syllabic-like rates of modulated noise, they tended to integrate or confuse the noise with the speech, causing an increase in speech errors. Optional training programs might be useful for CI listeners who show more difficulty understanding speech in noise.

Key Words: cochlear implants, hearing loss, speech perception

Typical environmental noises such as background conversations vary in frequency and amplitude over time. Listeners with normal hearing (NH) can take advantage of gaps in these fluctuating maskers.
They are able to "listen in the dips" of temporally varying noise to extract information about the speech signal, thereby experiencing improvement in speech recognition (e.g., Bernstein & Grant, 2009; Festen & Plomp, 1990; Jin & Nelson, 2006). Such performance improvement in the presence of fluctuating compared to steady-state noise conditions is known as masking release. Previous studies have reported that NH listeners' speech recognition scores could improve by as much as 80 percentage points when noise was modulated versus steady (Jin & Nelson, 2006). However, significantly reduced masking release or no masking release has been found in cochlear implant (CI) users and in NH listeners identifying vocoded speech that simulates speech processed by a CI device (NH-Sim; Fu & Nogaki, 2004; Kwon, Perry, Wilhelm, & Healy, 2012; Nelson & Jin, 2004; Nelson, Jin, Carney, & Nelson, 2003; Qin & Oxenham, 2003; Stickney, Zeng, Litovsky, & Assmann, 2004). For example, Nelson and colleagues (Nelson & Jin, 2004; Nelson et al., 2003) compared the performance of three listener groups (NH, CI, and NH-Sim) for sentence recognition in the presence of different masking noises, including steady-state noise and gated noise m...
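The masker conditions described above (a steady vs. syllabic-rate amplitude-modulated masker mixed with speech at +10 dB SNR) can be sketched as follows. This is a minimal illustration under assumed parameters; the sampling rate and modulation depth are placeholders, and the modulation shape used by the cited studies may differ.

```python
import numpy as np

FS = 16000  # sampling rate (Hz); illustrative value

def mix_at_snr(target, masker, snr_db):
    """Scale the masker so the target-to-masker power ratio equals snr_db,
    then sum. Returns (mixture, scaled_masker)."""
    p_t = np.mean(target ** 2)
    p_m = np.mean(masker ** 2)
    masker = masker * np.sqrt(p_t / (p_m * 10 ** (snr_db / 10)))
    return target + masker, masker

def modulated_masker(n, rate_hz=4.0, depth=1.0, rng=None):
    """Noise masker sinusoidally modulated at a syllabic rate (default 4 Hz).
    depth=0 yields a steady masker; depth=1 yields fully gated dips that
    listeners with good spectral resolution can 'listen in'."""
    rng = rng or np.random.default_rng()
    t = np.arange(n) / FS
    env = 0.5 * (1 + depth * np.sin(2 * np.pi * rate_hz * t))
    return rng.standard_normal(n) * env
```

Comparing recognition scores for mixtures built with `depth=0` (steady) vs. `depth=1` (modulated) at the same SNR quantifies masking release; the cited CI and NH-Sim results correspond to that difference shrinking toward zero.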