Physical and sexual abuse are common and associated with increased medical disease and health care utilization among HIV-infected women.
Discrimination of temporal patterns has been suggested as a relevant process in speech recognition by subjects with normal hearing [Sorkin, J. Acoust. Soc. Am. 87, 1695-1701 (1990)]. This paper investigates whether performance of Nucleus multichannel cochlear implant subjects on a temporal pattern discrimination task is an efficient and valid psychophysical measure of speech recognition ability. Stimuli consisted of temporal sequences defined by twelve 35-ms tones and eleven randomly generated temporal gaps separating the tones. A fixed-level same/different paradigm was used to measure the discriminability of these sequences as a function of their average correlation across a block of trials. On each trial, the "standard" sequence was generated randomly by drawing gap durations from a Gaussian distribution. The gaps of the comparison sequence were generated in a similar fashion, with a specified average correlation with the gaps of the standard sequence. Performance of implanted and normal-hearing subjects decreased monotonically with increasing average sequence correlation. However, performance across implanted subjects ranged from that observed for acoustically stimulated subjects with audiometrically normal hearing to levels near chance. Comparing these data with measures of speech recognition in the same subjects, we found that performance on standard speech recognition tests correlates with the ability to discriminate among such random temporal patterns.
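As a rough illustration of how such stimulus pairs can be generated, the sketch below draws the eleven standard gaps from a Gaussian and constructs the comparison gaps so that their expected correlation with the standard gaps equals a chosen value. The gap mean and SD, the clipping to positive durations, and all function names are illustrative assumptions; the abstract does not give those parameters.

```python
import numpy as np

def make_gap_sequences(rho, n_gaps=11, mean_ms=50.0, sd_ms=15.0, rng=None):
    """Draw a 'standard' gap sequence and a comparison sequence whose gaps
    have an expected correlation of rho with the standard gaps.
    mean_ms/sd_ms are illustrative placeholders, not the study's values."""
    rng = np.random.default_rng() if rng is None else rng
    z_std = rng.standard_normal(n_gaps)                    # standard-sequence noise
    z_ind = rng.standard_normal(n_gaps)                    # independent noise
    z_cmp = rho * z_std + np.sqrt(1.0 - rho**2) * z_ind    # correlation rho in expectation
    gaps_std = np.maximum(mean_ms + sd_ms * z_std, 1.0)    # clip to positive durations
    gaps_cmp = np.maximum(mean_ms + sd_ms * z_cmp, 1.0)
    return gaps_std, gaps_cmp

def tone_onsets(gaps, tone_ms=35.0):
    """Onset times (ms) of the twelve 35-ms tones given the eleven gaps."""
    iois = tone_ms + np.asarray(gaps)                      # inter-onset intervals
    return np.concatenate(([0.0], np.cumsum(iois)))
```

With this construction, rho = 1 reproduces the standard sequence exactly (a "same" trial) and rho = 0 yields statistically independent gap patterns, so sweeping rho varies task difficulty in the way the paradigm requires.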
This paper describes a two-microphone, software-programmable noise-reduction device that was interfaced to the Nucleus Spectra 22 speech processor to act as a front-end noise-reduction preprocessor. The development of the portable processor and of the noise-reduction algorithm, more formally known as beamforming, was originally motivated by complaints from individuals who use hearing aids about deteriorating performance with increasing levels of background noise. Since individuals who use cochlear implants have similar complaints, it was a natural extension to pose the question: "What benefit, if any, would the beamforming algorithm provide to individuals who use cochlear implants?" To arrive at an answer, the audio interface to the noise-reduction device was modified to make it compatible with the Nucleus Spectra 22 speech processor, and a set of precursory subject experiments was performed.[1] Eleven English-speaking subjects participated in a series of sessions during which they were tested with their own Spectra 22 speech processor and with the Alpha II beamforming algorithm acting to preprocess the input to their device. The beamforming algorithm was configured for a beam width of ±15° (15° on either side of the listener). Five of the 11 subjects were also tested with a no-beam program, used to demonstrate that any improvements measured with the Alpha II were caused by the beamforming algorithm and not by the addition of the second microphone. The subjects were tested at signal-to-noise ratios (SNRs) poor enough to degrade their Spectra-alone scores in noise relative to their scores in quiet. The average noise score was 29.3% (SD: 10.9) with the Spectra alone and 55.5% (SD: 27.5) with the Spectra plus the Alpha II beam program. The average scores in noise for the five
[1] The precursory studies were specific to the Nucleus 22 Channel Cochlear Implant and the Spectra 22 speech processor, both manufactured by Cochlear Corporation and Cochlear Limited. The noise-reduction device used in the precursory studies is known as the Alpha II and was developed by AudioLogic Inc. These data are a subset of data originally submitted to, and pending publication in, Ear and Hearing; they are offered with the understanding that this manuscript is tutorial and not intended as a forum for results.
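The abstract does not describe the internals of the Alpha II algorithm, so the sketch below shows only the generic idea behind a two-microphone beamformer: signals from the look direction add coherently across the microphones, while off-axis noise arrives with a relative inter-microphone delay and is attenuated after summation. The delay-and-sum structure, the 1-cm microphone spacing, and the function names are assumptions for illustration, not the Alpha II implementation.

```python
import numpy as np

def relative_delay_s(angle_deg, mic_spacing_m=0.01, c_m_per_s=343.0):
    """Arrival-time difference between the two microphones for a far-field
    source at angle_deg off the look direction (hypothetical 1-cm spacing)."""
    return mic_spacing_m * np.sin(np.radians(angle_deg)) / c_m_per_s

def delay_and_sum(front_mic, rear_mic, fs_hz, steer_delay_s=0.0):
    """Sum the two microphone signals after aligning them toward the look
    direction.  On-axis signals add coherently; off-axis signals arrive with a
    relative delay and are partially cancelled, with attenuation that depends
    on frequency and microphone spacing."""
    shift = int(round(steer_delay_s * fs_hz))
    if shift:
        rear_mic = np.roll(rear_mic, -shift)   # time-align the rear channel
    return 0.5 * (front_mic + rear_mic)
```

In this simplified picture, a source within the ±15° beam produces nearly identical signals at both microphones and passes through largely unchanged, while noise well outside the beam is reduced by the summation; an adaptive beamformer such as the Alpha II would additionally update its filtering in response to the noise field.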
Voicing is the feature that indicates whether a speech sound is quasi-periodic or aperiodic. It is used perceptually to discriminate pairs of sounds such as /s,z/, /p,b/, /f,v/, etc. The Nucleus WSP-III multichannel speech processor uses a stimulation rate equal to the fundamental frequency of the input speech signal: two pulses are sent in rapid sequence each fundamental period. When speech is unvoiced and a fundamental frequency cannot be determined, a random stimulation rate of approximately 100 Hz is used. Therefore this device uses the stimulation rate to encode voicing: unvoiced sounds are delivered using a random rate while voiced sounds are delivered using a more stable rate. We compared that voicing encoding strategy to a new one which uses an extra pulse per period when voicing is present in the input signal. Results were encouraging: one subject achieved 100% discrimination with the new strategy (after very limited training), compared to 85% obtained using the old strategy.
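A minimal sketch of the rate-based voicing encoding described above, assuming illustrative frame length, burst spacing, and jitter values that the abstract does not specify: voiced frames receive a short burst of pulses once per fundamental period (with an optional extra pulse for the new strategy), while unvoiced frames receive pulses at a randomized rate near 100 Hz.

```python
import random

def frame_pulse_times(voiced, f0_hz, frame_s=0.020, extra_pulse=False,
                      unvoiced_rate_hz=100.0, burst_gap_s=0.0005):
    """Return pulse onset times (s, relative to frame start) for one frame.
    Voiced: a short burst per fundamental period (two pulses, or three when
    the extra-pulse strategy is enabled).  Unvoiced: a jittered ~100-Hz rate.
    Frame length, burst spacing and jitter range are illustrative assumptions."""
    times = []
    if voiced and f0_hz > 0:
        period_s = 1.0 / f0_hz
        pulses_per_period = 3 if extra_pulse else 2
        t = 0.0
        while t < frame_s:
            times.extend(t + k * burst_gap_s for k in range(pulses_per_period))
            t += period_s
    else:
        t = 0.0
        while t < frame_s:
            times.append(t)
            t += (1.0 / unvoiced_rate_hz) * random.uniform(0.7, 1.3)  # randomized rate
    return times
```

The listener thus hears a steady, F0-locked pulse pattern (denser under the new strategy) during voiced segments and an irregular pattern during unvoiced segments, which is the cue the discrimination test probes.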