2018
DOI: 10.1101/504522
Preprint

Comparison of Two-Talker Attention Decoding from EEG with Nonlinear Neural Networks and Linear Methods

Abstract: Auditory attention decoding (AAD) through a brain-computer interface has seen a flowering of developments since it was first introduced by Mesgarani and Chang (2012) using electrocorticography (ECoG) recordings. AAD has been pursued for its potential application to hearing-aid design, in which an attention-guided algorithm selects, from multiple competing acoustic sources, which should be enhanced for the listener and which should be suppressed. Traditionally, researchers have separated the AAD problem into two stages: …
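The two-stage pipeline the abstract alludes to is conventionally (i) reconstructing the attended speech envelope from EEG with a linear backward decoder and (ii) selecting the talker whose envelope best correlates with the reconstruction. A minimal sketch with synthetic data follows; all shapes, the lag count, and the ridge weighting are illustrative assumptions, not parameters from the paper, and a real system would evaluate on held-out trials rather than in-sample as done here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 64-channel EEG at 64 Hz over a 60 s trial.
fs, n_ch, n_s = 64, 64, 64 * 60
env_a = rng.standard_normal(n_s)           # attended talker's speech envelope
env_b = rng.standard_normal(n_s)           # ignored talker's envelope
mixing = rng.standard_normal(n_ch)         # how the envelope projects to channels
# EEG that weakly tracks the attended envelope, buried in noise.
eeg = np.outer(env_a, mixing) + 5.0 * rng.standard_normal((n_s, n_ch))

def lagged(X, n_lags):
    """Design matrix of time-lagged channel copies (0..n_lags-1 samples)."""
    return np.hstack([np.roll(X, k, axis=0) for k in range(n_lags)])

# Stage 1: least-squares (ridge) backward decoder, EEG -> attended envelope.
n_lags = 16                                 # ~250 ms of lags at 64 Hz
X = lagged(eeg, n_lags)
lam = 0.1 * np.trace(X.T @ X) / X.shape[1]  # crude regularization heuristic
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ env_a)
recon = X @ w

# Stage 2: attend to whichever talker's envelope the reconstruction matches.
r_a = np.corrcoef(recon, env_a)[0, 1]
r_b = np.corrcoef(recon, env_b)[0, 1]
decoded = "A" if r_a > r_b else "B"
```

Because the decoder is trained on the attended envelope, `recon` correlates far more strongly with talker A than with talker B, which is the decision signal AAD systems exploit.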


Cited by 3 publications (3 citation statements)
References 34 publications (65 reference statements)
“…Future work needs to establish the minimum accuracy that would be required for a listener to receive a perceptual benefit from such a device. The use of deep learning for improving attention decoding accuracies has a good track record in scalp-EEG based studies of auditory attention, and the benefits seen in those studies would presumably translate across to wearable EEG studies [41]–[43]. The null distributions of the attention markers are also shown (grey).…”
Section: Discussion
confidence: 99%
“…Least-squares forward modeling: Ding and Simon, 2012a; Di Liberto et al., 2015; Alickovic et al., 2016, in review; Fiedler et al., 2017, 2019; Hjortkjaer et al., 2018; Kalashnikova et al., 2018; Lesenfants et al., 2018; Lunner et al., 2018; Verschueren et al., 2018; Wong et al., 2018. Inverse/backward modeling, supervised case: O'Sullivan et al., 2015, 2017; Aroudi et al., 2016; Das et al., 2016, 2018; Presacco et al., 2016; Biesmans et al., 2017; Fuglsang et al., 2017; Van Eyndhoven et al., 2017; Zink et al., 2017; Bednar and Lalor, 2018; Ciccarelli et al., 2018; Etard et al., 2018; Hausfeld et al., 2018; Narayanan and Bertrand, 2018; Schäfer et al., 2018; Vanthornhout et al., 2018; Verschueren et al., 2018; Wong et al., 2018; Akbari et al., 2019; Somers et al., 2019 … trial separately, and averaging over all training de/en-coders (Crosse et al., 2016).…”
Section: Computational Models In Practice
confidence: 99%
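The training scheme the quoted passage attributes to Crosse et al. (2016) — fitting a least-squares decoder on each trial separately, then averaging the weights across trials — can be sketched as follows. The trial counts, lag window, and regularization strength here are illustrative assumptions, and the synthetic "EEG" merely stands in for real recordings.

```python
import numpy as np

rng = np.random.default_rng(1)

def lagged(X, n_lags):
    """Design matrix of time-lagged channel copies (0..n_lags-1 samples)."""
    return np.hstack([np.roll(X, k, axis=0) for k in range(n_lags)])

def fit_trial_decoder(eeg, envelope, n_lags=16, lam=1e3):
    """Ridge-regularized least-squares backward decoder for a single trial."""
    X = lagged(eeg, n_lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)

# Hypothetical training set: 10 trials of 8-channel EEG tracking an envelope.
n_trials, n_s, n_ch = 10, 512, 8
weights = []
for _ in range(n_trials):
    env = rng.standard_normal(n_s)
    eeg = np.outer(env, rng.standard_normal(n_ch)) \
        + rng.standard_normal((n_s, n_ch))
    weights.append(fit_trial_decoder(eeg, env))

# Average the per-trial decoders into one generic decoder, per the
# averaging step the quoted passage cites (Crosse et al., 2016).
w_avg = np.mean(weights, axis=0)   # shape: (n_ch * n_lags,) = (128,)
```

Averaging per-trial solutions acts as an implicit regularizer: idiosyncratic noise in any one trial's fit is attenuated, while the consistent stimulus-response mapping survives.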
“…Past studies have established the feasibility of decoding auditory attention from both invasive [5, 6, 10, 11] and non-invasive [13–15] neural recordings. Despite these advancements, existing studies predominantly employ overly simplistic acoustic scenes that do not mimic real-world scenarios [6, 10–14, 16]. Common experimental setups have been limited to stationary talkers without background noise, and primarily focus on distinguishing between two concurrent talkers.…”
Section: Introduction
confidence: 99%