2012
DOI: 10.1121/1.3662074
The relative phonetic contributions of a cochlear implant and residual acoustic hearing to bimodal speech perception

Abstract: The addition of low-passed (LP) speech or even a tone following the fundamental frequency (F0) of speech has been shown to benefit speech recognition for cochlear implant (CI) users with residual acoustic hearing. The mechanisms underlying this benefit are still unclear. In this study, eight bimodal subjects (CI users with acoustic hearing in the non-implanted ear) and eight simulated bimodal subjects (using vocoded and LP speech) were tested on vowel and consonant recognition to determine the relative contrib…

Cited by 42 publications (55 citation statements); references 75 publications.
“…The DDE-2 test was used to assess subjects' dictation ability [Sheffield and Zeng, 2012] and tested overall speech recognition ability, including hearing function, short-term memory, working memory, signal integration, and writing skills; the test was performed in silence (DS) and in noise (DN) conditions. The stimuli included words, nonwords, and homophone non-homograph sentences.…”
Section: Ethics Statement
confidence: 99%
“…First, the low-frequency signal may provide the listener with segmental speech cues (voicing, manner of articulation, and partial F1 frequency cues) that are either complementary to, or redundant with, segmental cues available through the CI. By integrating the available speech cues across ears, the listener may be able to improve performance relative to performance with the CI alone (Kong and Braida, 2011; Sheffield and Zeng, 2012; Visram et al., 2012a; Yang and Zeng, 2013). Second, harmonicity cues contained in the low-frequency acoustic signal may improve listeners' ability to segment syllable, word, and phrase boundaries, thereby helping them to accurately decode spectrally degraded signals from the CI ear (Spitzer et al., 2009; Zhang et al., 2010; Kong et al., 2015).…”
confidence: 99%
“…For a number of years it was assumed that the improved fine pitch information achieved with low-frequency acoustic stimulation could be combined with the relatively weak pitch information from electric stimulation to account for the EAS benefit in noise. Of particular interest has been the role of the fundamental frequency, which is poorly conveyed by cochlear implants but accurately represented by the fine spectral and temporal acoustic cues available with residual low-frequency hearing (Brown & Bacon, 2010; Sheffield & Zeng, 2012). Qin and Oxenham (2005) demonstrated a reduced ability to use the fundamental frequency to segregate competing signals in a simulation of cochlear implant listening.…”
Section: Participants Also Completed the University Of Washington CLI
confidence: 99%
“…Another possibility, proposed by Sheffield & Zeng (2012), is that the low-frequency acoustic signal provides information about the target speech itself, which assists speech recognition. They believed this might occur instead of, or in addition to, glimpsing and/or sound source segregation.…”
Section: Participants Also Completed the University Of Washington CLI
confidence: 99%