Abstract: In this paper, a distributed consensus control approach for vehicular platooning systems is proposed. In formalizing the underlying consensus problem, a realistic vehicle dynamics model is considered and a velocity-dependent spacing policy between consecutive vehicles is realized. As a particular case, the approach allows for bidirectional vehicle interaction, which improves the cohesion between vehicles in the platoon. Exponential stability of the platoon dynamics is analyzed, also for the challenging scenario in which a velocity limitation is imposed on one of the vehicles in the platoon. The theoretical results are experimentally validated using a three-vehicle platoon consisting of (longitudinally) automated vehicles equipped with wireless inter-vehicle communication and radar-based sensing.
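The abstract does not spell out the control law, but a toy bidirectional platoon with a constant time-headway (velocity-dependent) spacing policy can illustrate the kind of consensus-style coupling described above. The gains, headway, and point-mass vehicle model below are hypothetical placeholders for illustration only, not the authors' controller.

```python
# Hypothetical sketch: bidirectional platoon coupling with a
# velocity-dependent (constant time-headway) spacing policy.
# Toy point-mass model; NOT the control law proposed in the paper.
import numpy as np

N = 3            # vehicles: leader + 2 followers
dt = 0.01        # integration step [s]
T = 40.0         # simulated time [s]
r = 5.0          # standstill gap [m]
h = 0.6          # time headway [s]; desired gap = r + h * v_i
kp, kd = 0.5, 0.8   # spacing and velocity-coupling gains (assumed)

pos = np.array([0.0, -10.0, -20.0])   # initial positions [m]
vel = np.zeros(N)                      # initial velocities [m/s]
v_lead = 20.0                          # leader cruise speed [m/s]

for _ in range(int(T / dt)):
    acc = np.zeros(N)
    acc[0] = 1.0 * (v_lead - vel[0])   # leader tracks its cruise speed
    for i in range(1, N):
        # spacing error w.r.t. predecessor (velocity-dependent gap)
        e_front = (pos[i - 1] - pos[i]) - (r + h * vel[i])
        u = kp * e_front + kd * (vel[i - 1] - vel[i])
        # bidirectional term: also react to the follower behind, if any,
        # which improves cohesion of the platoon as a whole
        if i + 1 < N:
            e_rear = (pos[i] - pos[i + 1]) - (r + h * vel[i + 1])
            u += -kp * e_rear + kd * (vel[i + 1] - vel[i])
        acc[i] = u
    vel += dt * acc
    pos += dt * vel

print("final inter-vehicle gaps [m]:", np.diff(pos[::-1]))
```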
Objective. A hearing aid's noise reduction algorithm cannot infer to which speaker the user intends to listen. Auditory attention decoding (AAD) algorithms allow this information to be inferred from neural signals, which leads to the concept of neuro-steered hearing aids. We aim to evaluate and demonstrate the feasibility of AAD-supported speech enhancement in challenging noisy conditions based on electroencephalography recordings. Approach. AAD performance with a linear versus a deep neural network (DNN)-based speaker separation was evaluated for same-gender speaker mixtures at three different speaker positions and under three different noise conditions. Main results. AAD results based on the linear approach were found to be at least on par with, and sometimes better than, purely DNN-based approaches in terms of AAD accuracy in all tested conditions. However, when using the DNN to support a linear data-driven beamformer, a performance improvement over the purely linear approach was obtained in the most challenging scenarios. The use of multiple microphones was also found to improve speaker separation and AAD performance over single-microphone systems. Significance. Recent proof-of-concept studies in this context each focus on a different method in a different experimental setting, which makes them hard to compare. Furthermore, they are tested in highly idealized experimental conditions, which are still far from a realistic hearing aid setting. This work provides a systematic comparison of linear and non-linear neuro-steered speech enhancement models, as well as a more realistic validation in challenging conditions.
Objective: A hearing aid's noise reduction algorithm cannot infer to which speaker the user intends to listen. Auditory attention decoding (AAD) algorithms allow this information to be inferred from neural signals, which leads to the concept of neuro-steered hearing aids. We aim to evaluate and demonstrate the feasibility of AAD-supported speech enhancement pipelines in challenging noisy conditions without access to clean speech signals. Methods: We evaluated a linear versus a deep neural network (DNN)-based speaker separation pipeline, with same-gender speaker mixtures for three different speaker positions and three different noise conditions. Results: AAD results based on the linear approach were found to be at least on par with, and sometimes better than, purely DNN-based approaches in terms of AAD accuracy in all tested conditions. However, when extending the DNN with a linear data-driven beamformer, a performance improvement over the purely linear approach was obtained in the most challenging scenarios. The use of multiple microphones was also found to improve speaker separation and AAD performance over single-microphone systems. Conclusion: Our study shows that neuro-steered speech enhancement, combining the best of both worlds (linear and DNN), results in robust performance. Significance: Recent proof-of-concept studies in this context each focus on a different method in a different experimental setting, which makes them hard to compare. Furthermore, their idealized experimental conditions provide only preliminary evidence on the viability of the AAD paradigm in a hearing aid context. This work provides a systematic proof-of-concept of neuro-steered speech enhancement in challenging conditions.
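Neither abstract details the decoding step itself, but a common AAD back end correlates a linearly reconstructed speech envelope (from EEG) with the envelopes of the separated speakers and selects the best match. The sketch below illustrates that idea only; the decoder weights, lag range, sampling rate, and envelopes are hypothetical stand-ins, not the pipelines evaluated in these studies.

```python
# Minimal sketch of a correlation-based auditory attention decoding (AAD)
# decision, assuming a pre-trained linear stimulus-reconstruction decoder.
# All signals and parameters below are hypothetical placeholders.
import numpy as np

def reconstruct_envelope(eeg, decoder, lags):
    """Linearly reconstruct the attended speech envelope from EEG.

    eeg:     (samples, channels) EEG segment
    decoder: (len(lags) * channels,) trained spatio-temporal weights
    lags:    iterable of sample delays applied to the EEG
    """
    feats = np.hstack([np.roll(eeg, lag, axis=0) for lag in lags])
    return feats @ decoder                     # (samples,) reconstructed envelope

def decode_attention(eeg, env_speaker1, env_speaker2, decoder, lags):
    """Return 0 or 1: which separated speaker envelope correlates best."""
    recon = reconstruct_envelope(eeg, decoder, lags)
    r1 = np.corrcoef(recon, env_speaker1)[0, 1]
    r2 = np.corrcoef(recon, env_speaker2)[0, 1]
    return int(r2 > r1)

# Toy usage with random data (stand-ins for real EEG and demixed envelopes)
rng = np.random.default_rng(0)
fs, seconds, channels = 64, 30, 24
eeg = rng.standard_normal((fs * seconds, channels))
env1 = rng.standard_normal(fs * seconds)
env2 = rng.standard_normal(fs * seconds)
lags = range(0, 16)                            # 0-250 ms at 64 Hz (assumed)
decoder = rng.standard_normal(len(lags) * channels)
print("decoded speaker:", decode_attention(eeg, env1, env2, decoder, lags))
```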