Hearing loss is one of the most common conditions affecting older adults worldwide. Frequent complaints from users of modern hearing aids include poor speech intelligibility in noisy environments and high cost, among other issues. However, the signal processing and audiological research needed to address these problems has long been hampered by proprietary development systems, underpowered embedded processors, and the difficulty of performing tests in real-world acoustical environments. To facilitate existing research in hearing healthcare and enable new investigations beyond what is currently possible, we have developed a modern, open-source hearing research platform, the Open Speech Platform (OSP). This paper presents the system design of the complete OSP wearable platform, from hardware through firmware and software to user applications. The platform provides a complete suite of basic and advanced hearing aid features that can be adapted by researchers. It serves web apps directly from a hotspot on the wearable hardware, enabling users and researchers to control the system in real time. In addition, it can simultaneously acquire high-quality electroencephalography (EEG) or other electrophysiological signals closely synchronized to the audio. All of these features are provided in a wearable form factor with enough battery life for hours of operation in the field.
INDEX TERMS Hearing aids (HAs), wearable computers, speech processing, field programmable gate arrays (FPGAs), electrophysiology (EEG), system-level design, open source hardware, embedded software, Internet of Things, research initiatives.
We have previously reported a real-time, open-source speech-processing platform (OSP) for hearing aid (HA) research. In this contribution, we describe a wearable version of this platform to facilitate audiological studies in the lab and in the field. The system is based on smartphone chipsets to leverage their power efficiency (FLOPS per watt) and economies of scale. We present the system architecture and discuss salient design elements in support of HA research. The ear-level assemblies support up to 4 microphones on each ear, with 96 kHz, 24-bit codecs. The wearable unit runs OSP Release 2018c on top of 64-bit Debian Linux for binaural HA processing with an overall latency of 5.6 ms. The wearable unit also hosts an embedded web server (EWS) to monitor and control the HA state in real time. We describe three example web apps and the typical audiological studies they enable. Finally, we describe a baseline speech enhancement module included with Release 2018c, and outline extensions to its algorithms as future work.
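As a rough illustration of how block size and sample rate bound the buffering portion of an audio pipeline's latency (the block sizes below are assumptions for illustration, not the platform's actual configuration; the abstract reports only the 5.6 ms overall figure):

```python
def block_latency_ms(block_size_samples: int, sample_rate_hz: int) -> float:
    """Latency contributed by buffering one block of audio, in milliseconds."""
    return 1000.0 * block_size_samples / sample_rate_hz

# At a 96 kHz sample rate, a 96-sample block adds 1 ms of buffering delay.
print(block_latency_ms(96, 96_000))  # 1.0
# Input buffering, processing, and output buffering must together fit
# within the reported 5.6 ms overall latency budget.
```

Halving the block size halves the buffering delay but doubles the callback rate, so a real system trades latency against per-block processing overhead.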
Convincing simulation of diffraction around obstacles is critical in modeling sound propagation in virtual environments. Due to the computational complexity of large-scale wavefield simulations, ray-based models of diffraction are used in real-time interactive multimedia applications. Among popular diffraction models, the Biot-Tolstoy-Medwin (BTM) edge diffraction model is the most accurate, but it suffers from high computational complexity and hence is difficult to apply in real time. This paper introduces an alternative ray-based approach to approximating diffraction, called Volumetric Diffraction and Transmission (VDaT). VDaT is a volumetric diffraction model, meaning it performs spatial sampling of paths along which sound can traverse the scene around obstacles. VDaT uses the spatial sampling results to estimate the BTM edge-diffraction amplitude response and path length, with a much lower computational cost than computing BTM directly. On average, VDaT matches BTM results within 1–3 dB over a wide range of size scales and frequencies in basic cases, and VDaT can handle small objects and gaps better than comparable state-of-the-art real-time diffraction implementations. A GPU-parallelized implementation of VDaT is shown to be capable of simulating diffraction on thousands of direct and specular reflection path segments in small-to-medium-size scenes, within strict real-time constraints and without any precomputed scene information.
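To make the idea of "spatial sampling of paths around obstacles" concrete, here is a toy Monte-Carlo sketch (not VDaT itself; the function names, the square sampling region, and the dB mapping are all illustrative assumptions): sample offsets in a region around the direct source-listener line, test each candidate path for occlusion, and map the fraction of clear paths to an amplitude estimate.

```python
import math
import random

def occlusion_fraction(is_blocked, n_samples=1024, spread=1.0, seed=0):
    """Monte-Carlo estimate of the fraction of candidate paths around the
    direct line that are blocked. `is_blocked(x, y)` tests one offset in
    the plane perpendicular to the source-listener axis."""
    rng = random.Random(seed)
    blocked = 0
    for _ in range(n_samples):
        x = rng.uniform(-spread, spread)
        y = rng.uniform(-spread, spread)
        if is_blocked(x, y):
            blocked += 1
    return blocked / n_samples

def amplitude_db(frac_clear, floor_db=-60.0):
    """Map the clear-path fraction to an attenuation in dB (toy model)."""
    if frac_clear <= 0.0:
        return floor_db
    return 20.0 * math.log10(frac_clear)

# Fully clear scene: no paths blocked, so no attenuation.
frac_clear = 1.0 - occlusion_fraction(lambda x, y: False)
print(amplitude_db(frac_clear))  # 0.0
```

A real volumetric model would weight samples by path length and frequency to approximate the BTM edge-diffraction response, rather than using this flat fraction-to-dB mapping.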