2017 25th European Signal Processing Conference (EUSIPCO)
DOI: 10.23919/eusipco.2017.8081390
EEG-based attention-driven speech enhancement for noisy speech mixtures using N-fold multi-channel Wiener filters

Abstract: Hearing prostheses have built-in algorithms to perform acoustic noise reduction and improve speech intelligibility. However, in a multi-speaker scenario the noise reduction algorithm has to determine which speaker the listener is focusing on, in order to enhance that speaker while suppressing the other interfering sources. Recently, it has been demonstrated that it is possible to detect auditory attention using electroencephalography (EEG). In this paper, we use multi-channel Wiener filters (MWFs) to filter ou…
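The MWF-based enhancement described in the abstract can be illustrated with a minimal sketch. This is a toy example under stated assumptions (a rank-1 speech model, stationary noise, and synthetic signals standing in for real hearing-aid microphones); all names and dimensions below are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic M-channel mixture: attended speech + additive noise.
M, T = 4, 20000
a = rng.standard_normal(M)             # steering vector of the attended speaker
s = rng.standard_normal(T)             # target speech signal (white, for illustration)
x = np.outer(a, s)                     # clean speech component at the mics
n = 0.5 * rng.standard_normal((M, T))  # noise component
y = x + n                              # observed noisy mixture

# Sample covariance matrices. A real system would estimate Rnn from
# noise-only segments, e.g. identified with help of the attention decoder.
Ryy = y @ y.T / T
Rnn = n @ n.T / T
Rxx = Ryy - Rnn                        # speech covariance estimate

# MWF estimating the speech component in reference channel 0:
# w = Ryy^{-1} Rxx e1
e1 = np.zeros(M)
e1[0] = 1.0
w = np.linalg.solve(Ryy, Rxx @ e1)

x_hat = w @ y                          # enhanced single-channel output

def snr_db(sig, noise):
    return 10 * np.log10(np.sum(sig**2) / np.sum(noise**2))

snr_in = snr_db(x[0], n[0])            # input SNR at the reference mic
snr_out = snr_db(w @ x, w @ n)         # output SNR after filtering
print(snr_out > snr_in)
```

The filter exploits spatial diversity across microphones, so the output SNR exceeds the reference microphone's input SNR; in the paper's N-fold setting, one such MWF would be computed per candidate speaker.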

Cited by 21 publications (36 citation statements)
References 19 publications
“…Hence, it is essential to understand the effects of this miniaturization on EEG signal processing methods which have been tested with traditional EEG equipment. In this work, the effect of short-distance EEG measurements that arise with miniaturization of EEG sensor devices was investigated within the context of an AAD task, which may be used in future-generation neuro-steered auditory prostheses, e.g., for the cognitive control of hearing aids or cochlear implants [23], [24].…”
Section: Discussion
confidence: 99%
“…The third choice for speaker separation was 60 degrees, in which case both speakers were on the same side of the head. Das et al. (2017) found that in a system performing neuro-steered noise suppression at low input SNRs, speaker positions on the same side of the head resulted in a relatively lower improvement in output SNRs compared to symmetric speaker set-ups. To balance between the two sides of the head and to avoid introducing a lateralization bias during the training of decoders (Das et al., 2016), we split this condition between two experiments where the speakers were at -30 and -90 degrees, and 30 and 90 degrees.…”
Section: Choice of SNRs
confidence: 94%
“…In this study we used the clean speech envelopes for decoding, which will not be available in a real-life scenario. Many algorithms have been proposed to cope with realistic microphone signals (Aroudi et al., 2018; Das et al., 2017; O'Sullivan et al., 2017; Van Eyndhoven et al., 2017), where it was found that the attention decoding performance was slightly worse than with clean speech envelopes. Therefore, the current results should be considered optimistic in this respect.…”
Section: Road to Neuro-steered Hearing Devices
confidence: 99%
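The envelope-based attention decoding referenced in these excerpts can be sketched as follows: a linear decoder maps time-lagged EEG channels to a reconstructed speech envelope, and the attended speaker is the one whose envelope correlates best with the reconstruction. Everything here (signals, dimensions, the plain least-squares fit) is a synthetic illustration, not any cited author's pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

C, T, L = 8, 5000, 5                    # EEG channels, samples, decoder lags
env_a = np.abs(rng.standard_normal(T))  # attended speaker's envelope
env_u = np.abs(rng.standard_normal(T))  # unattended speaker's envelope

# Synthetic EEG: each channel mixes the attended envelope plus noise.
mix = rng.standard_normal(C)
eeg = np.outer(mix, env_a) + 2.0 * rng.standard_normal((C, T))

def lagged(data, n_lags):
    # Stack time-lagged copies of every channel: (C * n_lags, T) design matrix.
    return np.vstack([np.roll(data, k, axis=1) for k in range(n_lags)])

X = lagged(eeg, L)

# Train a least-squares decoder on the first half of the data,
# using the attended envelope as the regression target.
half = T // 2
d = np.linalg.lstsq(X[:, :half].T, env_a[:half], rcond=None)[0]

# Decode the second half and correlate with both candidate envelopes.
rec = d @ X[:, half:]
corr = lambda u, v: np.corrcoef(u, v)[0, 1]
attended = corr(rec, env_a[half:]) > corr(rec, env_u[half:])
print(bool(attended))
```

In a real neuro-steered system this decision would then steer the noise-suppression filter toward the selected speaker; the excerpts above note that performance drops somewhat when the clean envelopes must themselves be estimated from microphone signals.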
“…In all these AAD approaches, access to clean speech streams is necessary. Therefore, some integrated demixing and noise suppression algorithms have been developed to grant access to clean speech streams (Aroudi et al., 2018; Das et al., 2017; O'Sullivan et al., 2017; Van Eyndhoven et al., 2017). Researchers have optimized the number and location of concealable miniature EEG electrodes for wearability purposes, minimizing the subsequent loss in performance (Fiedler et al., 2016; Mirkovic et al., 2015; Narayanan Mundanad and Bertrand, 2018).…”
Section: Introduction
confidence: 99%