Channel vocoders using either tone or band-limited noise carriers have been used in experiments to simulate cochlear implant processing in normal-hearing listeners. Previous results from these experiments have suggested that the two vocoder types produce speech of nearly equal intelligibility in quiet conditions. The purpose of this study was to further compare the performance of tone and noise-band vocoders in both quiet and noisy listening conditions. In each of four experiments, normal-hearing subjects were better able to identify tone-vocoded sentences and vowel-consonant-vowel syllables than noise-vocoded sentences and syllables, both in quiet and in the presence of either speech-spectrum noise or two-talker babble. An analysis of consonant confusions for listening in both quiet and speech-spectrum noise revealed significantly different error patterns that were related to each vocoder's ability to produce tone or noise output that accurately reflected the consonant's manner of articulation. Subject experience was also shown to influence intelligibility. Simulations using a computational model of modulation detection suggest that the noise vocoder's disadvantage is in part due to the intrinsic temporal fluctuations of its carriers, which can interfere with temporal fluctuations that convey speech recognition cues.
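The tone- and noise-carrier processing described above can be illustrated with a minimal channel-vocoder sketch. This is not the authors' implementation; it is a simplified illustration that splits the signal into log-spaced bands via FFT masking, extracts each band's temporal envelope, and modulates either a pure tone at the band's center frequency or band-limited noise (whose intrinsic fluctuations are the source of the disadvantage discussed above). Band edges, envelope smoothing, and channel count are illustrative assumptions.

```python
import numpy as np

def vocode(signal, fs, n_channels=6, carrier="tone", seed=0):
    """Minimal channel-vocoder sketch (illustrative only).

    Splits the input into log-spaced bands by FFT masking, extracts each
    band's envelope, and uses it to modulate either a tone at the band's
    geometric-center frequency or band-limited noise.
    """
    rng = np.random.default_rng(seed)
    n = len(signal)
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    # Log-spaced band edges between 100 Hz and 0.45 * fs (assumed values)
    edges = np.geomspace(100.0, 0.45 * fs, n_channels + 1)
    t = np.arange(n) / fs
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spec * mask, n)
        # Envelope: rectification + ~16 ms moving-average smoothing
        win = max(1, int(0.016 * fs))
        env = np.convolve(np.abs(band), np.ones(win) / win, mode="same")
        if carrier == "tone":
            car = np.sin(2 * np.pi * np.sqrt(lo * hi) * t)
        else:
            # Band-limited noise carrier: filter white noise with the same mask
            nspec = np.fft.rfft(rng.standard_normal(n))
            car = np.fft.irfft(nspec * mask, n)
            car /= np.max(np.abs(car)) + 1e-12
        out += env * car
    return out
```

Swapping `carrier="tone"` for `carrier="noise"` reproduces the two processing conditions being compared; note that the noise carrier's own envelope fluctuations ride on top of the imposed speech envelope, which is the interference mechanism the modulation-detection model points to.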
Objectives: 1) Measure sentence recognition in co-located and spatially separated target and masker configurations in school-aged children with unilateral hearing loss (UHL) and with normal hearing (NH). 2) Compare self-reported hearing-related quality of life (QoL) scores in school-aged children with UHL and NH. Design: Listeners were school-aged children (6-12 yrs) with permanent UHL (n = 41) or NH (n = 35) and adults with NH (n = 23). Sentence reception thresholds (SRTs) were measured using HINT-C sentences in quiet and in the presence of 2-talker child babble or a speech-shaped noise masker in target/masker spatial configurations: 0/0, 0/−60, 0/+60, or 0/±60 degrees azimuth. Maskers were presented at a fixed level of 55 dBA, while the level of the target sentences varied adaptively to estimate the SRT. Hearing-related QoL was measured using the Hearing Environments and Reflection on Quality of Life (HEAR-QL-26) questionnaire for child subjects. Results: As a group, subjects with unaided UHL had higher (poorer) SRTs than age-matched peers with NH in all listening conditions. Effects of age, masker type, and spatial configuration of target and masker signals were found. Spatial release from masking was significantly reduced in conditions where the masker was directed toward UHL subjects' normal-hearing ear. Hearing-related QoL scores were significantly poorer in subjects with UHL compared to those with NH. Degree of UHL, as measured by four-frequency pure-tone average (PTA), was significantly correlated with SRTs only in the 2 conditions where the masker was directed toward subjects' normal-hearing ear, although the unaided Speech Intelligibility Index (SII) at 65 dB SPL was significantly correlated with SRTs in 4 conditions, some of which directed the masker to the impaired ear or both ears. Neither PTA nor unaided SII was correlated with QoL scores.
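The adaptive SRT procedure described above (fixed masker level, target level varied to threshold) is typically implemented as a staircase track. The sketch below is a generic 1-up/1-down track that converges on the 50%-correct point; it is an illustration, not the exact HINT-C scoring rules, and the step sizes, reversal counts, and the simulated psychometric function are all assumptions.

```python
import numpy as np

def estimate_srt(prob_correct, start_level=65.0, step=4.0, small_step=2.0,
                 n_reversals=8, seed=0):
    """Illustrative 1-up/1-down adaptive track (not the exact HINT-C rules).

    The target level moves down after a correct response and up after an
    incorrect one, converging on the 50%-correct point. The SRT estimate
    is the mean level at the later reversals.
    """
    rng = np.random.default_rng(seed)
    level, direction = start_level, 0
    reversals = []
    while len(reversals) < n_reversals:
        # Simulate a trial: listener is correct with probability p(level)
        correct = rng.random() < prob_correct(level)
        new_dir = -1 if correct else 1
        if direction and new_dir != direction:
            reversals.append(level)
        direction = new_dir
        # Larger steps early in the track, smaller steps after 2 reversals
        cur_step = step if len(reversals) < 2 else small_step
        level += new_dir * cur_step
    return float(np.mean(reversals[2:]))
```

Running this against a simulated logistic psychometric function centered at, say, 50 dB returns an estimate near that midpoint, mirroring how the study's SRTs track each listener's 50%-correct target level against the fixed 55 dBA masker.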
Two experiments investigated the impact of reverberation and masking on speech understanding using cochlear implant (CI) simulations. Experiment 1 tested sentence recognition in quiet. Stimuli were processed with reverberation simulation (T = 0.425, 0.266, 0.152, and 0.0 s) and then either processed with vocoding (6, 12, or 24 channels) or were subjected to no further processing. Reverberation alone had only a small impact on perception when as few as 12 channels of information were available. However, when the processing was limited to 6 channels, perception was extremely vulnerable to the effects of reverberation. In Experiment 2, subjects listened to reverberated sentences, through 6- and 12-channel processors, in the presence of either speech-spectrum noise (SSN) or two-talker babble (TTB) at various target-to-masker ratios. The combined impact of reverberation and masking was profound, although there was no interaction between the two effects. This differs from results obtained in subjects listening to unprocessed speech, where interactions between reverberation and masking have been shown to exist. A speech transmission index (STI) analysis indicated a reasonably good prediction of speech recognition performance. Unlike previous investigations, the SSN and TTB maskers produced equivalent results, raising questions about the role of informational masking in CI-processed speech.
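The reverberation-simulation step above can be sketched as convolution with a synthetic impulse response whose energy decays by 60 dB over the target reverberation time. This is a toy illustration under stated assumptions (an exponentially decaying white-noise impulse response), not the room model the study used.

```python
import numpy as np

def reverberate(signal, fs, rt60, seed=0):
    """Toy reverberation sketch: convolve the signal with an exponentially
    decaying noise impulse response matching the target RT60 (illustrative
    only; the study's stimuli were generated with a proper room simulation).
    """
    if rt60 <= 0:
        return signal.copy()
    rng = np.random.default_rng(seed)
    n_ir = int(rt60 * fs)
    t = np.arange(n_ir) / fs
    # RT60 definition: energy decays by 60 dB over rt60 seconds,
    # so amplitude follows 10^(-3 t / rt60)
    decay = 10.0 ** (-3.0 * t / rt60)
    ir = rng.standard_normal(n_ir) * decay
    ir /= np.sqrt(np.sum(ir ** 2))  # unit-energy impulse response
    return np.convolve(signal, ir)[: len(signal)]
```

Feeding the output of this stage into a vocoder with 6 versus 12 channels reproduces the processing order of the experiments: reverberation first, then channel-limited envelope processing.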
The University of Massachusetts CI formula uses HINT sentence scores and the hearing history of both ears to predict the variance in postoperative monosyllabic word scores. This model compares favorably with previous studies that relied on Central Institute for the Deaf sentence scores and uses patient data collected by most centers in the United States.