Amplitude modulation (AM) and frequency modulation (FM) are commonly used in communication, but their relative contributions to speech recognition have not been fully explored. To bridge this gap, we derived slowly varying AM and FM from speech sounds and conducted listening tests using stimuli with different modulations in normal-hearing and cochlear-implant subjects. We found that although AM from a limited number of spectral bands may be sufficient for speech recognition in quiet, FM significantly enhances speech recognition in noise, as well as speaker and tone recognition. Additional speech reception threshold measures revealed that FM is particularly critical for speech recognition with a competing voice and is independent of spectral resolution and similarity. These results suggest that AM and FM provide independent yet complementary contributions to support robust speech recognition under realistic listening situations. Encoding FM may improve auditory scene analysis, cochlear-implant, and audio-coding performance.

Keywords: auditory analysis | cochlear implant | neural code | phase | scene analysis

Acoustic cues in speech sounds allow a listener to derive not only the meaning of an utterance but also the speaker's identity and emotion. Most traditional research has taken a reductionist approach to investigating the minimal cues for speech recognition (1). Previous studies using either naturally produced whispered speech (2) or artificially synthesized speech (3, 4) have isolated and identified several important acoustic cues for speech recognition. For example, computers relying primarily on spectral cues and human cochlear-implant listeners relying primarily on temporal cues can achieve a high level of speech recognition in quiet (5-7). As a result, spectral and temporal acoustic cues have been interpreted as built-in redundancy mechanisms in speech recognition (8). However, this redundancy interpretation is challenged by the extremely poor performance of both computers and human cochlear-implant users in realistic listening situations, where noise is typically present (7, 9).

The goal of this study was to delineate the relative contributions of spectral and temporal cues to speech recognition in realistic listening situations. We chose three speech perception tasks that are notoriously difficult for computers and human cochlear-implant users: speech recognition with a competing voice, speaker recognition, and Mandarin tone recognition. We approached the issue by extracting slowly varying amplitude modulation (AM) and frequency modulation (FM) from a number of frequency bands in speech sounds and testing their relative contributions to speech recognition in acoustic and electric hearing. AM-only speech has been used in previous studies (3, 10) and is considered an acoustic simulation of the cochlear implant (5). Different from previous studies using relatively "fast" FM to track formant changes in speech production (4, 11) or fine structure in speech acoustics (12, 13), the "…
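As an illustration of the decomposition described above, the following Python sketch extracts a slowly varying AM and FM pair from one frequency band, using a Hilbert envelope for the AM and the instantaneous-frequency deviation from the band center for the FM. The band edges, filter orders, and 50 Hz modulation cutoff are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def am_fm_decompose(x, fs, band=(500.0, 1000.0), mod_cutoff=50.0):
    """Split one frequency band of a signal into slowly varying AM and FM
    components (illustrative sketch, not the published implementation)."""
    # Band-pass filter to isolate the subband.
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    sub = sosfiltfilt(sos, x)

    # Analytic signal: magnitude gives AM, phase derivative gives
    # instantaneous frequency.
    analytic = hilbert(sub)
    am = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)

    # Keep only the slow modulations, as the study emphasizes.
    lp = butter(4, mod_cutoff, btype="lowpass", fs=fs, output="sos")
    am_slow = sosfiltfilt(lp, am)
    fm_slow = sosfiltfilt(lp, inst_freq - np.mean(band))  # FM re: band center
    return am_slow, fm_slow
```

Applying this to several contiguous bands and resynthesizing from AM alone versus AM plus FM would yield stimuli analogous to the two conditions compared in the listening tests.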
Objectives: Assessment of cochlear implant outcomes centers on speech discrimination. Despite dramatic improvements in speech perception, music perception remains a challenge for most cochlear implant users, and no standardized test exists to quantify music perception in a clinically practical manner. This study presents the University of Washington Clinical Assessment of Music Perception (CAMP) test as a reliable and valid music perception test for English-speaking, adult cochlear implant users.

Design: Forty-two cochlear implant subjects were recruited from the University of Washington Medical Center cochlear implant program and referred by two implant manufacturers. Ten normal-hearing volunteers were drawn from the University of Washington Medical Center and associated campuses. A computer-driven, self-administered test was developed to examine three specific aspects of music perception: pitch direction discrimination, melody recognition, and timbre recognition. The pitch subtest used an adaptive procedure to determine just-noticeable differences (JNDs) for complex-tone pitch direction discrimination within the range of 1 to 12 semitones. The melody and timbre subtests assessed recognition of 12 commonly known melodies played with complex tones in an isochronous manner and of eight musical instruments playing an identical five-note sequence, respectively. Testing was repeated for cochlear implant subjects to evaluate test-retest reliability. Normal-hearing volunteers were also tested to demonstrate differences in performance between the two populations.

Results: For cochlear implant subjects, pitch direction discrimination JNDs ranged from 1.0 to 8.0 semitones (mean = 3.0, SD = 2.3). Melody and timbre recognition ranged from 0 to 94.4% correct (mean = 25.1, SD = 22.2) and from 20.8 to 87.5% correct (mean = 45.3, SD = 16.2), respectively. Each subtest significantly correlated at least moderately with both Consonant-Nucleus-Consonant (CNC) word recognition scores and spondee recognition thresholds in steady-state noise and two-talker babble. Intraclass correlation coefficients for test-retest reliability on the pitch, melody, and timbre subtests were 0.85, 0.92, and 0.69, respectively. Normal-hearing volunteers had a mean pitch direction discrimination threshold of 1.0 semitone, the smallest interval tested, and mean melody and timbre recognition scores of 87.5% and 94.2%, respectively.

Conclusions: The CAMP test discriminates a wide range of music perceptual ability in cochlear implant users. Moderate correlations were seen between music test results and both CNC word recognition scores and spondee recognition thresholds in background noise, and test-retest reliability was moderate to strong. The CAMP test provides a reliable and valid metric for a clinically practical, standardized evaluation of music perception in adult cochlear implant users.
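The pitch subtest's adaptive JND procedure can be illustrated with a generic transformed staircase. The Python sketch below implements a 2-down/1-up rule with multiplicative steps; the actual rule, step sizes, and stopping criterion used by CAMP are not given in the abstract, so every parameter here is an assumption.

```python
import random

def staircase_jnd(present_trial, start=12.0, floor=1.0,
                  step_down=0.5, step_up=2.0, reversals_needed=8):
    """Illustrative 2-down/1-up adaptive staircase converging on a
    pitch-direction JND in semitones (rule and step sizes are
    assumptions, not the published CAMP parameters)."""
    interval = start          # current pitch interval in semitones
    correct_streak = 0
    reversals, last_dir = [], None
    while len(reversals) < reversals_needed:
        if present_trial(interval):          # listener judged direction correctly
            correct_streak += 1
            if correct_streak == 2:          # 2-down: make the task harder
                correct_streak = 0
                if last_dir == "up":
                    reversals.append(interval)
                interval = max(floor, interval * step_down)
                last_dir = "down"
        else:                                # 1-up: make the task easier
            correct_streak = 0
            if last_dir == "down":
                reversals.append(interval)
            interval = min(start, interval * step_up)
            last_dir = "up"
    return sum(reversals) / len(reversals)   # JND estimate in semitones

# Example: a simulated listener whose true threshold is 3 semitones
# and who guesses at chance (50%) below it.
jnd = staircase_jnd(lambda st: st >= 3.0 or random.random() < 0.5)
```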
The goals of the present study were to measure acoustic temporal modulation transfer functions (TMTFs) in cochlear implant listeners and to examine the relationship between modulation detection and speech recognition abilities. The effects of automatic gain control, presentation level, and number of channels on modulation detection thresholds (MDTs) were examined using the listeners' clinical sound processors. The general form of the TMTF was low-pass, consistent with previous studies. The operation of automatic gain control had no effect on MDTs when the stimuli were presented at 65 dBA, and MDTs depended neither on presentation level (50 to 75 dBA) nor on the number of channels. Significant correlations were found between MDTs and speech recognition scores, and the rates of decay of the TMTFs were predictive of speech recognition abilities. Spectral-ripple discrimination was also evaluated to examine the relationship between temporal and spectral envelope sensitivities: no correlation was found between the two measures, yet 56% of the variance in speech recognition was predicted jointly by the two tasks. The present study suggests that temporal modulation detection measured through the sound processor can serve as a useful measure of the ability of clinical sound processing strategies to deliver clinically pertinent temporal information.
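MDTs of this kind are typically measured with sinusoidally amplitude-modulated (SAM) noise whose modulation depth is varied adaptively. The Python sketch below generates one such probe; the carrier choice, the depth convention (dB re: 100% modulation), and the level compensation are standard in the TMTF literature but are assumptions with respect to this particular study.

```python
import numpy as np

def sam_noise(fs=44100, dur=1.0, fm=8.0, depth_db=-12.0, seed=0):
    """Sinusoidally amplitude-modulated broadband noise, the usual
    probe for modulation detection thresholds. Modulation depth is
    expressed in dB re: 100% modulation, i.e. 20*log10(m).
    Parameter choices here are illustrative."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur)) / fs
    carrier = rng.standard_normal(t.size)
    m = 10 ** (depth_db / 20.0)                      # linear depth, 0..1
    modulated = (1.0 + m * np.sin(2 * np.pi * fm * t)) * carrier
    # Compensate the RMS increase caused by modulation (factor
    # sqrt(1 + m^2/2)) so detection cannot rely on loudness cues.
    modulated /= np.sqrt(1.0 + m ** 2 / 2.0)
    return modulated
```

Tracking the smallest detectable depth across modulation frequencies (e.g., 8 to 300 Hz) traces out the low-pass TMTF shape the study reports.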
In contrast to traditional Fourier analysis, a signal can also be decomposed into amplitude and frequency modulation components. The speech processing strategy in most modern cochlear implants extracts and encodes only amplitude modulation in a limited number of frequency bands. While amplitude modulation encoding has allowed cochlear implant users to achieve good speech recognition in quiet, their performance in noise is severely compromised. Here, we propose a novel speech processing strategy that encodes both amplitude and frequency modulations in order to improve cochlear implant performance in noise. By removing the center frequency from the subband signals and additionally limiting the frequency modulation's range and rate, the present strategy transforms the fast-varying temporal fine structure into a slowly varying frequency modulation signal. As a first step, we evaluated the potential contribution of additional frequency modulation to speech recognition in noise via acoustic simulations of the cochlear implant. We found that while amplitude modulation from a limited number of spectral bands is sufficient to support speech recognition in quiet, frequency modulation is needed to support speech recognition in noise. In particular, an improvement of as much as 71 percentage points was observed for sentence recognition in the presence of a competing voice. The present result strongly suggests that frequency modulation should be extracted and encoded to improve cochlear implant performance in realistic listening situations, and we have proposed several implementation methods to stimulate further investigation.
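The core transformation the abstract describes, removing the subband center frequency and then limiting the FM range and rate, can be sketched directly. In the Python sketch below, the 500 Hz range and 400 Hz rate limits are illustrative placeholders rather than the authors' published parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def slow_fm(subband, fs, f_center, max_range_hz=500.0, max_rate_hz=400.0):
    """Convert a subband's fast temporal fine structure into a slowly
    varying FM signal: subtract the center frequency, clip the FM
    excursion (range limit), then low-pass its trajectory (rate limit).
    Cutoff values are illustrative, not the published parameters."""
    # Instantaneous frequency from the analytic signal.
    phase = np.unwrap(np.angle(hilbert(subband)))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)

    fm = inst_freq - f_center                        # remove center frequency
    fm = np.clip(fm, -max_range_hz, max_range_hz)    # limit FM range
    sos = butter(2, max_rate_hz, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, fm)                      # limit FM rate
```

In a strategy of this kind, the slow FM would modulate each channel's carrier around its center frequency while the channel envelope supplies the AM, so both cues survive the limited temporal bandwidth of electrical stimulation.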