The present study investigated the effects of expansion on the objective and subjective performance of 20 hearing instrument users fitted binaurally with digital ITE products. Objective performance was evaluated in quiet using the Connected Speech Test and in noise using the Hearing in Noise Test. Subjective performance was evaluated in two ways: (a) by having each participant rate, on a daily basis, their satisfaction with the amount of noise reduction they perceived in each expansion condition and (b) by having each participant indicate which expansion condition they preferred after completing a two-week trial. Results indicated that expansion significantly reduced low-level speech perception performance; however, satisfaction and preference ratings significantly increased when using expansion. The effects of degree of hearing loss, expansion kneepoint, and expansion ratio on the effectiveness of expansion for a given listener are discussed.
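To make the roles of the expansion kneepoint and expansion ratio concrete, the following is a minimal sketch of a static expansion gain rule. The specific kneepoint, ratio, and gain values are illustrative assumptions, not the fitting parameters used in the study:

```python
def expansion_gain(input_db, kneepoint_db=45.0, ratio=2.0, linear_gain_db=20.0):
    """Gain (in dB) applied to a signal at input level input_db.

    At or above the kneepoint the instrument applies its linear gain.
    Below the kneepoint, gain is reduced by (ratio - 1) dB for every
    1 dB the input falls below the kneepoint, so very low-level inputs
    (e.g., microphone noise) are attenuated more than low-level speech.
    """
    if input_db >= kneepoint_db:
        return linear_gain_db
    return linear_gain_db - (ratio - 1.0) * (kneepoint_db - input_db)
```

This illustrates the trade-off reported above: raising the kneepoint or the ratio attenuates more circuit and ambient noise (improving satisfaction) but also attenuates low-level speech cues (reducing recognition).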
The relative importance and absolute contributions of various spectral regions to speech intelligibility under conditions of either neutral or predictable sentential context were examined. Specifically, the frequency-importance functions for a set of monosyllabic words embedded in a highly predictive sentence context versus a sentence with little predictive information were developed using Articulation Index (AI) methods. Forty-two young normal-hearing adults heard sentences presented at signal-to-noise ratios from –8 to +14 dB in a noise shaped to conform to the peak spectrum of the speech. Results indicated only slight differences in ⅓-octave importance functions due to differences in semantic context, although the crossovers differed by a constant 180 Hz. Methodological and theoretical aspects of parameter estimation in the AI model are discussed. The results suggest that semantic context, as defined by these conditions, may alter frequency-importance relationships in addition to the dynamic range over which intelligibility rises.
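The AI framework underlying this analysis treats intelligibility as an importance-weighted sum of per-band audibility. A simplified sketch follows; the two-band weights in the usage example are arbitrary assumptions, and the 30-dB dynamic range with a +15/−15 dB clipping rule is the conventional simplification, not this study's fitted parameters:

```python
def band_audibility(snr_db):
    """Proportion of an assumed 30-dB speech dynamic range that is
    audible in a band at the given SNR, clipped to [0, 1]."""
    return min(1.0, max(0.0, (snr_db + 15.0) / 30.0))

def articulation_index(importance, audibility):
    """AI as an importance-weighted sum of band audibilities.

    importance: per-band frequency-importance weights (sum to 1.0)
    audibility: per-band audibility proportions, each in [0, 1]
    """
    assert abs(sum(importance) - 1.0) < 1e-9
    return sum(w * a for w, a in zip(importance, audibility))
```

A frequency-importance function is precisely the `importance` vector here; the study asks whether that vector, estimated from intelligibility data, shifts when sentence context changes.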
Hearing instrument users prefer the use of multichannel expansion despite the fact that multichannel expansion may significantly reduce the recognition of low-level speech in quiet and in noise. Although restricting expansion to Channels 1 and 2 (i.e., 2000 Hz and below) maintained subjective benefit for wide dynamic range compression hearing instrument users, the recognition of low-level speech was not completely preserved.
The problem of combining the outputs of an array of microphones as a single input for a hearing aid is investigated. Emphasis is placed on the conservative prediction of realistically achievable performance gains provided by the array over a single microphone. Performance improvement is measured as a change in the speech reception threshold (SRT) between single-microphone and multimicrophone conditions. Consistent with previous work, predictions of this change in SRT using intelligibility-averaged gain are shown to be good. Consequently, this measure is used, along with changes in signal-to-noise ratios (SNRs), to evaluate array performance. The results presented include the effects of acoustic head shadow, small-room reverberation, microphone placement uncertainty, and desired-speaker location uncertainty. It is in this context that realistic predictions of speech enhancement provided by robust adaptive microphone array processors are discussed. Performance improvements are demonstrated relative to the "best" single microphone in the array for three types of spatial filters: fixed, robust block-processed, and robust adaptive. The performance of the robust block-processed arrays is shown to be attainable with adaptive implementations. One fundamental criterion employed in robust beamformer design directly limits the amount of cancellation of the desired signal that can occur.
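The simplest fixed spatial filter of the kind evaluated here is a delay-and-sum beamformer, which time-aligns each microphone toward the desired talker and averages. A minimal sketch, with arbitrary assumed geometry and integer-sample steering delays (real arrays require fractional delays and calibration):

```python
import numpy as np

def delay_and_sum(mic_signals, delays_samples):
    """Align and average multichannel recordings.

    mic_signals: (n_mics, n_samples) array of microphone signals
    delays_samples: integer steering delay per microphone; signals
    arriving from the look direction add coherently, while noise
    from other directions averages down.
    """
    n_mics = mic_signals.shape[0]
    out = np.zeros(mic_signals.shape[1])
    for sig, d in zip(mic_signals, delays_samples):
        out += np.roll(sig, -d)  # undo the assumed propagation delay
    return out / n_mics
```

The robust designs discussed in the abstract go further by constraining how far the filter weights may adapt, which is what bounds cancellation of the desired signal under placement and location uncertainty.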
The effect of high speech presentation levels on consonant recognition and feature transmission was assessed in eight participants with normal hearing. Consonant recognition in noise (0 dB signal-to-noise ratio) was measured at five overall speech levels ranging from 65 to 100 dB SPL. Consistent with the work of others, overall percent-correct performance decreased as the presentation level of speech increased [e.g., G. A. Studebaker, R. L. Sherbecoe, D. M. McDaniel, and C. A. Gwaltney, J. Acoust. Soc. Am. 105(4), 2431-2444 (1999)]. Confusion matrices were analyzed in terms of relative percent information transmitted at each speech presentation level, as a function of feature. Six feature sets (voicing, place, nasality, duration, frication, and sonorance) were analyzed. Results showed the duration feature (fricatives with long consonant durations) to be most affected by increases in level, while the voicing feature was relatively unaffected. In addition, alveolar consonants were substantially affected by level, while palatal consonants were not. While the underlying mechanisms responsible for decreases in performance with level increases are unclear, an analysis of common error patterns at high levels suggests that saturation of the neural response and/or a loss of neural synchrony may play a role.
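The information-transmission analysis applied to these confusion matrices follows the classic Miller-and-Nicely approach: compute the mutual information between stimulus and response. A sketch, with a toy 2×2 matrix in the test as an illustrative assumption (relative percent transmitted divides this by the stimulus entropy):

```python
import math

def transmitted_information(confusions):
    """Mean information transmitted (bits) from a confusion matrix.

    confusions[i][j] = count of times stimulus i drew response j.
    Computes the mutual information between stimulus and response:
    T = sum over cells of p_ij * log2(p_ij / (p_i * p_j)).
    """
    total = sum(sum(row) for row in confusions)
    p_i = [sum(row) / total for row in confusions]           # stimulus marginals
    p_j = [sum(confusions[i][j] for i in range(len(confusions))) / total
           for j in range(len(confusions[0]))]               # response marginals
    t = 0.0
    for i, row in enumerate(confusions):
        for j, count in enumerate(row):
            if count:
                p_ij = count / total
                t += p_ij * math.log2(p_ij / (p_i[i] * p_j[j]))
    return t
```

Perfect identification of two equiprobable consonants transmits 1 bit; responses independent of the stimulus transmit 0 bits, which is how feature-level degradation at high levels is quantified.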