2017
DOI: 10.1088/1741-2552/aa7ab4
Neural decoding of attentional selection in multi-speaker environments without access to clean sources

Abstract: Objective: People who suffer from hearing impairments can find it difficult to follow a conversation in a multi-speaker environment. Current hearing aids can suppress background noise; however, little can be done to help a user attend to a single conversation among many without knowing which speaker the user is attending to. Cognitively controlled hearing aids that use auditory attention decoding (AAD) methods are the next step in offering help. Translating the successes in AAD research to real-…

Cited by 103 publications (94 citation statements)
References 61 publications
“…In recent years, methods for decoding attention to natural speech have been heavily investigated (O'Sullivan et al., 2014; Mirkovic et al., 2015; Akram, Presacco, Simon, Shamma, & Babadi, 2016; Fuglsang, Dau, & Hjortkjaer, 2017; O'Sullivan, Crosse, Di Liberto, & Lalor, 2017; O'Sullivan, Chen, et al., 2017; Denk et al., 2018; Miran et al., 2018). This has, for the most part, been driven by the goal of realizing these algorithms in wearable devices (Fiedler, Obleser, Lunner, & Graversen, 2016; Haghighi, Moghadamfalahi, Akcakaya, Shinn-Cunningham, & Erdogmus, 2017; Mirkovic, Bleichner, De Vos, & Debener, 2016).…”
Section: Discussion
confidence: 99%
“…This provides a proof of concept of the neuroscientific phenomenon, but it also opens a path toward neurotechnology that exploits this decoding. Such attentional decoding, combined with algorithmic separation of speech sources, could for instance lead to the development of hearing aids that selectively amplify the speaker of interest based on the user's attention 12 .…”
Section: Introduction
confidence: 99%
“…Auditory attention decoding approaches take advantage of features of the speech signal that are known to be selectively enhanced for attended over unattended speech. Examples of such speech features include the changes in sound intensity over time (the speech envelope) 7–9,11 , or the changes in intensity over time across frequency bands (the speech spectrogram) 12,13 . Generally, a (regularized) regression is used to learn a mapping from the subject's electrophysiological data (e.g., EEG) to the chosen speech signal features, based on data for which the attended speech is known.…”
Section: Introduction
confidence: 99%
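The regularized-regression mapping described in the statement above can be illustrated with a small stimulus-reconstruction sketch: a ridge regression learns to map time-lagged multichannel EEG onto the attended speech envelope. Everything below is synthetic and illustrative — the channel count, number of lags, and ridge penalty are assumed values, not parameters from the paper.

```python
# Hypothetical sketch of backward-model ("stimulus reconstruction") AAD:
# ridge regression from lagged EEG to the attended speech envelope.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels, n_lags = 2000, 8, 16  # assumed sizes
lam = 1.0                                    # ridge penalty (assumed)

# Synthetic attended envelope, and EEG that linearly carries it plus noise.
envelope = rng.standard_normal(n_samples)
mixing = rng.standard_normal(n_channels)
eeg = np.outer(envelope, mixing) + 0.5 * rng.standard_normal((n_samples, n_channels))

def lagged(x, n_lags):
    """Time-lagged design matrix: one column per (channel, lag) pair."""
    cols = [np.roll(x, k, axis=0) for k in range(n_lags)]
    X = np.concatenate(cols, axis=1)
    return X[n_lags:]  # drop rows contaminated by the circular roll

X = lagged(eeg, n_lags)          # shape: (n_samples - n_lags, n_channels * n_lags)
y = envelope[n_lags:]

# Closed-form ridge solution: w = (X^T X + lam * I)^-1 X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
reconstruction = X @ w

# Decoding score: correlation between reconstructed and true envelope.
# In real AAD, this score is compared across candidate speakers.
r = np.corrcoef(reconstruction, y)[0, 1]
```

In practice the decoder is trained on data where the attended speaker is known, then applied to new EEG; the speaker whose envelope correlates best with the reconstruction is taken as the attended one.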
“…Throughout the paper, we assume that the envelopes of the clean speech signals are available. Given that this assumption does not hold in practical scenarios, recent algorithms for extracting speech envelopes from acoustic mixtures [24,25,26,27,28] can be added as a pre-processing module to our framework.…”
Section: Dynamic Encoding and Decoding Models
confidence: 99%
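The envelope pre-processing step mentioned in the statement above can be sketched with one common broadband-envelope recipe: rectify the waveform and smooth it with a low-pass (here, moving-average) filter. The sampling rate, modulation frequency, and window length below are assumed toy values, not the cited algorithms' parameters.

```python
# Minimal envelope-extraction sketch: rectification + moving-average smoothing.
# The toy "speech" is a 100 Hz carrier with a slow 3 Hz amplitude modulation,
# so the recovered envelope should track the modulation.
import numpy as np

fs = 1000.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
modulation = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * t)   # slow envelope
audio = modulation * np.sin(2 * np.pi * 100 * t)     # modulated carrier

raw_env = np.abs(audio)              # full-wave rectification
win = int(fs / 8)                    # ~125 ms smoothing window (assumed)
envelope = np.convolve(raw_env, np.ones(win) / win, mode="same")
```

Real systems replace the toy signal with an acoustic mixture and add a source-separation front end first; the point here is only the shape of the envelope-extraction step.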