The project R01DC015429 “Open community platform for hearing aid algorithm research” provides a software platform for real-time, low-latency audio signal processing: the open Master Hearing Aid (openMHA). It contains a versatile set of basic and advanced methods for hearing aid processing, as well as tools and manuals enabling the design of custom setups for algorithm development and evaluation. Documentation is provided for different user levels, in particular for audiologists, application engineers, and algorithm designers. The software runs on various computer systems, including lab setups and portable setups. Portable setups are of particular interest for the evaluation of new methods in real-world scenarios. In addition to standard off-the-shelf hardware, a portable, integrated research platform for openMHA is provided in conjunction with the SBIR project R44DC016247. This contribution introduces openMHA and discusses the usage and possible application scenarios of the portable openMHA setup in hearing research. The opportunity is given to try a smartphone-based self-fitting application for the portable openMHA, and to learn about the flexible configuration and remote control of openMHA running a typical hearing aid processing chain. Furthermore, a discussion and exchange of ideas on current challenges and future developments is offered.
The NIDCD has recently funded a number of projects to develop portable signal processing tools that enable real-time processing of the acoustic environment. The overarching goal is to provide a large group of researchers with the means to efficiently develop and evaluate, in collaborative multi-center environments, novel signal processing schemes, individualized fitting procedures, and technical solutions and services for hearing devices such as hearing aids and assistive listening devices. We report on the specific goals and results of two such projects. In one of them (R01DC015429), an open-source software platform for real-time runtime environments is developed: the open Master Hearing Aid (openMHA). It provides an extendible set of algorithms for hearing aid signal processing and runs under Linux, Windows, and Mac operating systems on standard PC platforms and on small-scale ARM-based boards. An optimized version of openMHA is provided for the companion SBIR project (R44DC016247), which delivers a portable, rugged, versatile, and wearable platform featuring an ARM Cortex®-A8 processor. The resulting Portable Hearing Aid Community Platform consists of both hardware elements that provide the desired advanced functionality and software routines that provide all the features researchers may need to develop new algorithms.
Recent advancements in neuroscientific research and miniaturized ear-electroencephalography (EEG) technologies have led to the idea of employing brain signals as an additional input to hearing aid algorithms. The information acquired through EEG could potentially be used to control the audio signal processing of the hearing aid or to monitor communication-related physiological factors. In previous work, we implemented a research platform to develop methods that utilize EEG in combination with a hearing device. The setup combines currently available mobile EEG hardware and the so-called Portable Hearing Laboratory (PHL), which can fully replicate a complete hearing aid. Audio and EEG data are synchronized using the Lab Streaming Layer (LSL) framework. In this study, we evaluated the setup in three scenarios, focusing in particular on the alignment of audio and EEG data. In Scenario I, we measured the latency between software event markers and actual audio playback of the PHL. In Scenario II, we measured the latency between an analog input signal and the sampled data stream of the EEG system. In Scenario III, we measured the latency in the whole setup as it would be used in a real EEG experiment. The results of Scenario I showed a jitter (standard deviation of trial latencies) of below 0.1 ms. The jitter in Scenarios II and III was around 3 ms in both cases. The results suggest that the increased jitter compared to Scenario I can be attributed to the EEG system. Overall, the findings show that the measurement setup can present acoustic stimuli with accurate timing while generating LSL data streams over multiple hours of playback. Further, the setup can capture the audio and EEG LSL streams with sufficient temporal accuracy to extract event-related potentials from EEG signals. We conclude that our setup is suitable for studying closed-loop EEG and audio applications for future hearing aids.
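The jitter metric used above (the standard deviation of per-trial latencies between event markers and measured onsets) can be computed directly from paired timestamp streams. A minimal sketch in Python, assuming two aligned timestamp lists (marker time and measured onset, in seconds) have already been extracted from the recorded LSL streams; the function name and the example values are illustrative, not from the study:

```python
import statistics

def latency_stats_ms(marker_times, onset_times):
    """Mean latency and jitter for paired event timestamps.

    marker_times: software event-marker timestamps (seconds)
    onset_times:  measured audio/EEG onset timestamps (seconds)
    Returns (mean_latency_ms, jitter_ms), where jitter is the
    standard deviation of the per-trial latencies.
    """
    latencies = [(onset - marker) * 1000.0
                 for marker, onset in zip(marker_times, onset_times)]
    return statistics.mean(latencies), statistics.stdev(latencies)

# Hypothetical trial data: markers every second, onsets delayed ~5 ms
markers = [0.0, 1.0, 2.0, 3.0]
onsets = [0.0050, 1.0052, 2.0048, 3.0050]
mean_ms, jitter_ms = latency_stats_ms(markers, onsets)
```

With these illustrative values the mean latency is about 5 ms and the jitter well under 1 ms, i.e. the regime reported for Scenario I; Scenarios II and III would show the same mean-vs-jitter decomposition with a larger spread.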