Attempts to develop speech enhancement algorithms that improve speech intelligibility for cochlear implant (CI) users have met with limited success. To improve speech enhancement for CI users, we propose performing enhancement in a cochlear filter-bank feature space, a feature set specifically designed around the auditory stimuli delivered by a CI. We leverage a convolutional neural network (CNN) to extract both stationary and non-stationary components of environmental acoustics and speech. We propose three CNN architectures: (1) a vanilla CNN that directly generates the enhanced signal; (2) a spectral-subtraction-style CNN (SS-CNN) that first predicts the noise and then generates the enhanced signal by subtracting that noise estimate from the noisy signal; and (3) a Wiener-style CNN (Wiener-CNN) that generates an optimal mask for suppressing noise. A key limitation of these networks is that they introduce considerable delay, which restricts real-time use for CI users; to address this, the study also considers causal variants of each network. Our experiments show that the proposed networks, in both causal and non-causal forms, achieve significant improvement over existing baseline systems. We also found that the causal Wiener-CNN outperforms the other networks and yields the best overall envelope correlation-based measure (ECM). The proposed algorithms represent a viable option for implementation on the CCi-MOBILE research platform as a pre-processor for CI users in naturalistic environments.
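As a rough illustration of the three output strategies described above, the following PyTorch sketch shows a small CNN operating on a filter-bank feature sequence, with a switch between direct mapping, noise prediction plus subtraction, and mask estimation, and with an optional causal (left-padded) convolution for low-delay processing. This is not the authors' implementation; the layer sizes, the sigmoid mask, and the 22-band feature dimension are illustrative assumptions.

```python
# Hypothetical sketch of the three CNN output styles for filter-bank-domain
# speech enhancement. Feature tensors are assumed to be (batch, bands, frames).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CausalConv1d(nn.Module):
    """1-D convolution that only sees past frames (padding on the left only)."""

    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.pad = kernel_size - 1
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size)

    def forward(self, x):
        x = F.pad(x, (self.pad, 0))  # (left, right) padding on the time axis
        return self.conv(x)


class EnhancementCNN(nn.Module):
    """Small CNN whose output stage follows one of the three styles."""

    def __init__(self, n_bands=22, hidden=64, style="wiener", causal=True):
        super().__init__()
        conv = CausalConv1d if causal else (
            lambda i, o, k: nn.Conv1d(i, o, k, padding=k // 2))
        self.body = nn.Sequential(
            conv(n_bands, hidden, 5), nn.ReLU(),
            conv(hidden, hidden, 5), nn.ReLU(),
            conv(hidden, n_bands, 5),
        )
        self.style = style

    def forward(self, noisy):                    # noisy: (batch, bands, frames)
        out = self.body(noisy)
        if self.style == "vanilla":              # directly predict enhanced features
            return out
        if self.style == "ss":                   # predict noise, then subtract it
            return noisy - out
        if self.style == "wiener":               # predict a [0, 1] suppression mask
            return noisy * torch.sigmoid(out)
        raise ValueError(self.style)


# Example: causal Wiener-style enhancement of a 22-band feature sequence.
noisy = torch.randn(1, 22, 100)
enhanced = EnhancementCNN(style="wiener", causal=True)(noisy)
print(enhanced.shape)                            # torch.Size([1, 22, 100])
```

The causal variant trades the use of future context for low algorithmic delay, which is the property that matters for real-time CI pre-processing.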
Hearing loss is an increasingly prevalent condition resulting from damage to the inner ear that reduces speech intelligibility. The societal need for assistive hearing devices has grown substantially over the past two decades; however, actual human performance with such devices has seen only modest gains relative to advancements in digital signal processing (DSP) technology. A major challenge with clinical hearing technologies is their limited ability to run complex, computationally demanding signal processing algorithms. The CCi-MOBILE platform, developed at UT-Dallas, provides the research community with an open-source, flexible, easy-to-use, software-mediated, powerful computing research interface for conducting a wide variety of listening experiments. The platform supports cochlear implants (CIs) and hearing aids (HAs) independently, as well as bimodal hearing (i.e., a CI in one ear and an HA in the contralateral ear). It is well suited to hearing research in both quiet and naturalistic noisy conditions, as well as studies of sound localization and lateralization. The platform uses commercially available smartphone/tablet devices as portable sound processors and can provide bilateral electric and acoustic stimulation. The hardware components, firmware, and software suite are presented to demonstrate safety for the speech scientist and the CI/HA user, highlight user-specificity, and outline various research applications of the platform.