Otoacoustic emission (OAE) tests of the medial-olivocochlear reflex (MOCR) in humans were assessed for viability as clinical assays. Two reflection-source OAEs [TEOAEs: transient-evoked otoacoustic emissions evoked by a 47 dB sound pressure level (SPL) chirp; and discrete-tone SFOAEs: stimulus-frequency otoacoustic emissions evoked by 40 dB SPL tones, and assessed with a 60 dB SPL suppressor] were compared in 27 normal-hearing adults. The MOCR elicitor was a 60 dB SPL contralateral broadband noise. An estimate of MOCR strength, MOCR%, was defined as the vector difference between OAEs measured with and without the elicitor, normalized by OAE magnitude (without elicitor). An MOCR was reliably detected in most ears. Within subjects, MOCR strength was correlated across frequency bands and across OAE type. The ratio of across-subject variability to within-subject variability ranged from 2 to 15, with wideband TEOAEs and averaged SFOAEs giving the highest ratios. MOCR strength in individual ears was reliably classified into low, normal, and high groups. SFOAEs using 1.5 to 2 kHz tones and TEOAEs in the 0.5 to 2.5 kHz band gave the best statistical results. TEOAEs had more clinical advantages. Both assays could be made faster for clinical applications, such as screening for individual susceptibility to acoustic trauma in a hearing-conservation program.
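The MOCR% metric described above is a normalized complex (vector) difference between the two OAE measurements. A minimal Python sketch of that calculation follows; the function name, variable names, and example values are illustrative assumptions, not taken from the study.

```python
import numpy as np

def mocr_percent(oae_without, oae_with):
    """Normalized MOCR strength from complex OAE measurements.

    Sketch of the metric described in the abstract: the vector (complex)
    difference between OAEs measured with and without the contralateral
    elicitor, normalized by the no-elicitor OAE magnitude and expressed
    as a percentage.
    """
    return 100.0 * np.abs(oae_with - oae_without) / np.abs(oae_without)

# Example: an SFOAE whose amplitude drops and phase shifts slightly when
# the 60 dB SPL contralateral noise elicitor is turned on.
baseline = 1.0 * np.exp(1j * 0.0)        # arbitrary units
with_elicitor = 0.85 * np.exp(1j * 0.1)
print(mocr_percent(baseline, with_elicitor))  # roughly 18%
```

Because the difference is taken as a vector, a phase shift with no amplitude change still produces a nonzero MOCR%, which a magnitude-only comparison would miss.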
The effects of audibility and age on masking for sentences in continuous and interrupted noise were examined in listeners with real and simulated hearing loss. The absolute thresholds of each of ten listeners with sensorineural hearing loss were simulated in normal-hearing listeners through a combination of spectrally-shaped threshold noise and multi-band expansion for octave bands with center frequencies from 0.25 to 8 kHz. Each individual hearing loss was simulated in two groups of three normal-hearing listeners (an age-matched and a non-age-matched group). The speech-to-noise ratio (S/N) for 50%-correct identification of Hearing in Noise Test (HINT) sentences was measured in backgrounds of continuous and temporally-modulated (10 Hz square-wave) noise at two overall levels, for unprocessed speech and for speech that was amplified with the NAL-RP prescription. The S/N in both continuous and interrupted noise of the hearing-impaired listeners was relatively well simulated in both groups of normal-hearing listeners. Thus, release from masking (the difference in S/N obtained in continuous versus interrupted noise) appears to be determined primarily by audibility. Minimal age effects were observed in this small sample. Observed values of masking release were compared to predictions derived from intelligibility curves generated using the extended speech intelligibility index (ESII) [Rhebergen et al. (2006). J. Acoust. Soc. Am. 120, 3988-3997].
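As a rough illustration of the masking-release measure defined above (the S/N needed for 50%-correct sentences in continuous noise minus that needed in interrupted noise), here is a minimal Python sketch of a simple 1-down/1-up adaptive track converging on the 50% point. The step size, trial count, and simulated listener are assumptions for illustration, not the HINT's actual procedure.

```python
import numpy as np

def track_snr50(present_sentence, start_snr_db=0.0, step_db=2.0, n_trials=20):
    """1-down/1-up adaptive track estimating the S/N for 50%-correct sentences.

    present_sentence(snr_db) should return True if the sentence was
    repeated correctly at that S/N. Returns the mean of the last few
    reversal points as the threshold estimate.
    """
    snr, reversals, last_correct = start_snr_db, [], None
    for _ in range(n_trials):
        correct = present_sentence(snr)
        if last_correct is not None and correct != last_correct:
            reversals.append(snr)
        snr += -step_db if correct else step_db  # harder after correct, easier after error
        last_correct = correct
    return float(np.mean(reversals[-6:])) if reversals else snr

# Example with a simulated listener whose 50% point sits at -6 dB S/N.
rng = np.random.default_rng(0)
def fake_listener(snr_db, midpoint=-6.0, slope=1.0):
    p_correct = 1.0 / (1.0 + np.exp(-slope * (snr_db - midpoint)))
    return rng.random() < p_correct

snr50_continuous = track_snr50(fake_listener)
snr50_interrupted = track_snr50(fake_listener, start_snr_db=-10.0)
masking_release_db = snr50_continuous - snr50_interrupted
print(snr50_continuous, snr50_interrupted, masking_release_db)
```

In a real experiment the two tracks would be run in continuous and 10 Hz square-wave interrupted noise separately; a larger (more positive) masking_release_db indicates more benefit from listening in the gaps.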
Temporal-envelope cues appear to play a large role in the identification of environmental sounds through cochlear implants. The finer distinctions made by the HP group compared with the LP group may be related to a better ability both to resolve temporal differences and to use gross spectral cues. These findings are qualitatively consistent with patterns of confusions observed in the reception of speech segments through cochlear implants.
The goal of this study was to determine the extent to which the difficulty experienced by impaired listeners in understanding noisy speech can be explained on the basis of elevated tone-detection thresholds. Twenty-one impaired ears of 15 subjects, spanning a variety of audiometric configurations with average hearing losses up to 75 dB, were tested for reception of consonants in a speech-spectrum noise. Speech level, noise level, and frequency-gain characteristic were varied to generate a range of listening conditions. Results for impaired listeners were compared to those of normal-hearing listeners tested under the same conditions with extra noise added to approximate the impaired listeners' detection thresholds. Results for impaired and normal listeners were also compared on the basis of articulation indices. Consonant recognition by this sample of impaired listeners was generally comparable to that of normal-hearing listeners with similar threshold shifts listening under the same conditions. When listening conditions were equated for articulation index, there was no clear dependence of consonant recognition on average hearing loss. Assuming that the primary consequence of the threshold simulation in normal-hearing listeners is loss of audibility (as opposed to suprathreshold discrimination or resolution deficits), it is concluded that the primary source of difficulty in listening in noise for listeners with moderate or milder hearing impairments, aside from the noise itself, is the loss of audibility.
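Since the comparison above hinges on equating listening conditions for articulation index, a simplified sketch of the underlying idea (an importance-weighted sum of per-band audibility) may help. The octave-band importance weights below are illustrative, not the standardized ANSI S3.5 values, and the (SNR + 15)/30 audibility mapping follows the SII convention; treat this as a sketch of the concept rather than the study's exact calculation.

```python
import numpy as np

# Illustrative octave bands and importance weights (sum to 1).
band_centers_hz = np.array([250, 500, 1000, 2000, 4000, 8000])
importance = np.array([0.10, 0.15, 0.25, 0.25, 0.15, 0.10])

def articulation_index(band_snr_db):
    """Importance-weighted sum of per-band audibility.

    Each band's speech-to-noise ratio is mapped to an audibility factor
    in [0, 1] over a 30 dB range, then weighted by band importance.
    """
    audibility = np.clip((np.asarray(band_snr_db, dtype=float) + 15.0) / 30.0, 0.0, 1.0)
    return float(np.sum(importance * audibility))

# Two listening conditions with the same index should, under the
# audibility hypothesis, yield similar consonant scores even if overall
# levels and frequency-gain characteristics differ.
print(articulation_index([10, 8, 5, 2, -3, -10]))   # roughly 0.58
print(articulation_index([0, 2, 6, 8, 10, 12]))
```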