In simulations of electrical-acoustic stimulation (EAS), vocoded speech intelligibility is aided by preservation of low-frequency acoustic cues. However, the speech signal is often interrupted in everyday listening conditions, and the effects of interruption on hybrid speech intelligibility are poorly understood. Additionally, listeners rely on information-bearing acoustic changes to understand full-spectrum speech (as measured by cochlea-scaled entropy [CSE]) and vocoded speech (CSECI), but how listeners utilize these informational changes to understand EAS speech is unclear. Here, normal-hearing participants heard noise-vocoded sentences with three to six spectral channels in two conditions: vocoder-only (80-8000 Hz) and simulated hybrid EAS (vocoded above 500 Hz; original acoustic signal below 500 Hz). In each sentence, four 80-ms intervals containing high-CSE or low-CSE acoustic changes were replaced with speech-shaped noise. As expected, performance improved with the preservation of low-frequency fine-structure cues (EAS). This improvement decreased for continuous EAS sentences as more spectral channels were added, but increased as more channels were added to noise-interrupted EAS sentences. Performance was impaired more when high-CSE intervals were replaced by noise than when low-CSE intervals were replaced, but this pattern did not differ across listening modes. The use of information-bearing acoustic changes to understand speech is predicted to generalize to cochlear implant users who receive EAS inputs.
Listeners utilize information-bearing acoustic changes in the speech signal to understand sentences. This has been demonstrated in full-spectrum speech using cochlea-scaled entropy (CSE; Stilp & Kluender, 2010 PNAS) and in vocoded speech (CSECI; Stilp et al., 2013 JASA). In simulations of electrical-acoustic stimulation (EAS), vocoded speech intelligibility is aided by the preservation of low-frequency acoustic cues. The extent to which listeners rely on information-bearing acoustic changes to understand EAS speech is unclear. Here, normal-hearing listeners were presented with noise-vocoded sentences with 3–6 spectral channels in two conditions: (1) vocoder-only (80–8000 Hz, filtered using third-order elliptical filters), and (2) simulated hybrid EAS (vocoded above 500 Hz; original acoustic signal below 500 Hz). In each sentence, four 80-ms intervals containing high-CSECI or low-CSECI acoustic changes were replaced with speech-shaped noise. As expected, performance improved with more channels and with the preservation of low-frequency fine-structure cues (EAS). Relative to control vocoded sentences with no noise replacement, performance was impaired more when high-CSECI intervals were replaced by noise than when low-CSECI intervals were replaced in 5- and 6-channel sentences, but not at lower spectral resolutions. This effect held across vocoder-only and EAS sentences. Findings support the conclusion that EAS users make use of information-bearing acoustic changes to understand speech.
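The stimulus manipulation described above (channel vocoding, plus an EAS simulation that sums the unprocessed low-frequency band with the vocoded high band) can be sketched in Python. This is a minimal illustration under stated assumptions, not the authors' processing chain: it uses Butterworth filters rather than the third-order elliptical filters reported, log-spaced channel edges, noise carriers, and a fixed 50-Hz envelope cutoff, and it omits level equalization and the CSE-based interval replacement. All function and parameter names here are hypothetical.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def noise_vocode(x, fs, n_channels, lo=500.0, hi=8000.0, env_cut=50.0):
    """Noise-vocode x across n_channels log-spaced bands between lo and hi Hz."""
    edges = np.logspace(np.log10(lo), np.log10(hi), n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    env_sos = butter(2, env_cut, btype="low", fs=fs, output="sos")
    for f1, f2 in zip(edges[:-1], edges[1:]):
        band_sos = butter(3, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(band_sos, x)
        # Envelope: half-wave rectify the band signal, then low-pass filter
        env = sosfilt(env_sos, np.maximum(band, 0.0))
        # Carrier: white noise restricted to the same analysis band
        carrier = sosfilt(band_sos, rng.standard_normal(len(x)))
        out += env * carrier
    return out

def simulate_eas(x, fs, n_channels, crossover=500.0):
    """Hybrid EAS simulation: intact acoustic signal below the crossover,
    noise-vocoded signal above it."""
    lp_sos = butter(3, crossover, btype="low", fs=fs, output="sos")
    low_band = sosfilt(lp_sos, x)
    high_band = noise_vocode(x, fs, n_channels, lo=crossover, hi=8000.0)
    return low_band + high_band
```

A vocoder-only condition would call `noise_vocode(x, fs, n_channels, lo=80.0)` alone; the EAS condition replaces the vocoded content below 500 Hz with the original low-pass-filtered waveform, which is the contrast the experiments above manipulate.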