2019
DOI: 10.3389/fnins.2019.00153

A Tutorial on Auditory Attention Identification Methods

Abstract: Auditory attention identification methods attempt to identify the sound source of a listener's interest by analyzing measurements of electrophysiological data. We present a tutorial on the numerous techniques that have been developed in recent decades, and we present an overview of current trends in multivariate correlation-based and model-based learning frameworks. The focus is on the use of linear relations between electrophysiological and audio data. The way in which these relations are computed differs. Fo…
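As a concrete illustration of the linear, correlation-based framework the abstract refers to, the sketch below implements a generic ridge-regularized backward (stimulus-reconstruction) decoder: time-lagged EEG is mapped to a speech envelope, and the attended speaker is taken to be the one whose envelope correlates best with the reconstruction. This is a minimal sketch, not the tutorial's own implementation; the array shapes, lag range, and ridge parameter are assumed for illustration.

```python
# Minimal sketch of a correlation-based backward (stimulus-reconstruction)
# decoder for auditory attention identification. Assumes preprocessed EEG
# (time x channels) and speech envelopes sampled at the same rate; the lag
# range and ridge value are illustrative assumptions.
import numpy as np

def lagged_design(eeg, lags):
    """Stack time-lagged copies of the EEG column-wise: (time x (channels*lags))."""
    n_samples, n_channels = eeg.shape
    X = np.zeros((n_samples, n_channels * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(eeg, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0.0          # zero the wrapped-around samples
        elif lag < 0:
            shifted[lag:] = 0.0
        X[:, i * n_channels:(i + 1) * n_channels] = shifted
    return X

def train_backward_decoder(eeg, attended_env, lags, ridge=1e3):
    """Ridge/MMSE solution mapping lagged EEG to the attended speech envelope."""
    X = lagged_design(eeg, lags)
    R = X.T @ X + ridge * np.eye(X.shape[1])   # regularized autocorrelation matrix
    r = X.T @ attended_env                     # cross-correlation with the envelope
    return np.linalg.solve(R, r)

def identify_attended(eeg, env_a, env_b, decoder, lags):
    """Compare correlations of the reconstruction with the two candidate envelopes."""
    reconstruction = lagged_design(eeg, lags) @ decoder
    corr_a = np.corrcoef(reconstruction, env_a)[0, 1]
    corr_b = np.corrcoef(reconstruction, env_b)[0, 1]
    return ("A", corr_a, corr_b) if corr_a > corr_b else ("B", corr_a, corr_b)
```

A backward model of this kind pools information across all EEG channels into a single reconstruction, which is why correlation with each candidate envelope is a natural decision statistic.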

Cited by 65 publications (75 citation statements)
References 94 publications (120 reference statements)
“…Identifying the degree and direction of attention in near real-time requires that this information can be extracted from short time intervals. Several studies have shown that attention can be reliably decoded from single-trial EEG data in the two competing speaker paradigm (Horton et al., 2014; Mirkovic et al., 2015, 2016; O'Sullivan et al., 2015; Biesmans et al., 2017; Fiedler et al., 2017; Fuglsang et al., 2017, 2020; Haghighi et al., 2017) using various auditory attention decoding (AAD) methods (for a review see: Alickovic et al., 2019). In these studies, AAD procedures demonstrated above chance-level accuracy for evaluation periods ranging from 2 to 60 s. In a neurofeedback application, features should be obtained as quickly as possible.…”
Section: Introduction (mentioning)
confidence: 99%
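The 2 to 60 s evaluation periods mentioned above correspond to decision windows over which the attended and unattended correlations are compared. Below is a minimal sketch of that evaluation, assuming a reconstructed envelope and the two candidate envelopes are already available at a common sampling rate; the function name and window grid are illustrative, not taken from the cited studies.

```python
# Decoding accuracy as a function of decision-window length (illustrative sketch).
import numpy as np

def window_accuracy(reconstruction, attended, unattended, fs, window_s):
    """Fraction of non-overlapping windows in which the attended envelope
    correlates more strongly with the reconstruction than the unattended one."""
    win = int(window_s * fs)
    n_win = len(reconstruction) // win
    correct = 0
    for k in range(n_win):
        sl = slice(k * win, (k + 1) * win)
        r_att = np.corrcoef(reconstruction[sl], attended[sl])[0, 1]
        r_ign = np.corrcoef(reconstruction[sl], unattended[sl])[0, 1]
        correct += int(r_att > r_ign)
    return correct / max(n_win, 1)

# Example sweep over window lengths in seconds, assuming 64 Hz envelopes:
# accuracy = {w: window_accuracy(rec, env_att, env_ign, fs=64, window_s=w)
#             for w in (2, 5, 10, 30, 60)}
```

Shorter windows give faster decisions but noisier correlation estimates, which is the trade-off the quoted passage points to for neurofeedback use.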
“…Several recent magnetoencephalographic and electroencephalographic (EEG) studies have shown that neural responses during auditory selectivity tasks correlate more strongly with attended than with ignored speech (e.g., Ding and Simon, 2012; Horton et al., 2013; O’Sullivan et al., 2015). Auditory attention decoding models have been established to describe the relationship between continuous speech and ongoing cortical recordings (Alickovic et al., 2019). The linear temporal response function (TRF) model (Crosse et al., 2016) has been used widely to predict EEG responses to speech (i.e., the encoding model; e.g., Di Liberto et al., 2015) and to reconstruct speech from associated EEG signals (i.e., the decoding model; e.g., Ding and Simon, 2012; Mirkovic et al., 2015; Teoh and Lalor, 2019) using off-line regression techniques.…”
Section: Introduction (mentioning)
confidence: 99%
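To make the encoding/decoding distinction in the quoted passage concrete, here is a minimal sketch of a forward TRF-style model: a regularized least-squares fit that predicts each EEG channel from a time-lagged speech envelope. The lag range and ridge parameter are assumptions for illustration, not values from the cited work.

```python
# Minimal sketch of a forward (encoding) TRF: predict EEG from a lagged speech
# envelope with ridge regression. Shapes, lags, and the ridge term are assumed.
import numpy as np

def stimulus_lags(envelope, lags):
    """Design matrix whose columns are time-lagged copies of a 1-D envelope."""
    n = len(envelope)
    X = np.zeros((n, len(lags)))
    for i, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, i] = envelope[:n - lag]
        else:
            X[:lag, i] = envelope[-lag:]
    return X

def fit_trf(envelope, eeg, lags, ridge=1.0):
    """One TRF per EEG channel, returned as an (n_lags x n_channels) matrix."""
    X = stimulus_lags(envelope, lags)
    R = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(R, X.T @ eeg)

def predict_eeg(envelope, trf, lags):
    """Predicted EEG (time x channels), for evaluating the encoding-model fit."""
    return stimulus_lags(envelope, lags) @ trf
```

Swapping the roles of stimulus and response, i.e., regressing the envelope on lagged EEG, gives the decoding direction used for stimulus reconstruction in the studies quoted above.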
“…As the relative intensity of background interference may affect the quality of neural tracking of attended speech (Alickovic et al., 2019), the impacts of various SNR conditions on auditory attention decoding in realistic scenarios should also be considered. Generally, low SNRs interfere with attended speech segregation, and the quality of neural tracking of attended speech declines with increasing noise intensity (Kong et al., 2014; Das et al., 2018).…”
Section: Introduction (mentioning)
confidence: 99%
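For context on how such SNR conditions are typically constructed, the short sketch below scales a masker so that a requested target-to-masker power ratio in dB is met before mixing. The function and SNR values are illustrative assumptions, not taken from the cited studies.

```python
# Scale a masker signal to reach a requested SNR (in dB) relative to the target.
import numpy as np

def mix_at_snr(target, masker, snr_db):
    """Return target + scaled masker so that 10*log10(P_target / P_masker) == snr_db."""
    p_target = np.mean(target ** 2)
    p_masker = np.mean(masker ** 2)
    gain = np.sqrt(p_target / (p_masker * 10.0 ** (snr_db / 10.0)))
    return target + gain * masker

# e.g. mixtures = {snr: mix_at_snr(attended_speech, background, snr) for snr in (-6, 0, 6)}
```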
“…To illustrate why and how the MESD is useful in the evaluation of AAD algorithms, we apply it to an illustrative example in which we compare two variants of the MMSE decoder for AAD as proposed in [6] and [10], respectively. 1) Description of the two variants: Given a training set of M data windows, in the first variant of [6] (also adopted in, e.g., [8]), per-window decoders (corresponding to decision window length τ) are computed, after which the M decoders are averaged to obtain one final decoder. The second variant of [10] (also adopted in, e.g., [12], [17], [18]) first averages the M per-window autocorrelation matrices (or equivalently, concatenates all windows) to train a single decoder across all training windows simultaneously.…”
Section: B. Illustrative Example: MESD-Based Performance Evaluation (mentioning)
confidence: 99%
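The two training variants contrasted in this passage differ only in where the averaging happens. The sketch below shows that difference in plain ridge-regression form: variant 1 solves one decoder per window and averages the M decoders, while variant 2 averages the per-window correlation matrices and solves once, which is equivalent to training on the concatenated windows. Shapes and the ridge term are illustrative assumptions, not the exact estimators of [6] or [10].

```python
# Two ways to train one MMSE/ridge decoder from M training windows (sketch).
import numpy as np

def variant1_average_decoders(X_windows, y_windows, ridge=1e3):
    """Variant 1: solve a decoder per window, then average the M decoders."""
    decoders = []
    for X, y in zip(X_windows, y_windows):
        R = X.T @ X + ridge * np.eye(X.shape[1])
        decoders.append(np.linalg.solve(R, X.T @ y))
    return np.mean(decoders, axis=0)

def variant2_average_correlations(X_windows, y_windows, ridge=1e3):
    """Variant 2: average autocorrelation/cross-correlation matrices, solve once."""
    dim = X_windows[0].shape[1]
    R = np.zeros((dim, dim))
    r = np.zeros(dim)
    for X, y in zip(X_windows, y_windows):
        R += X.T @ X
        r += X.T @ y
    M = len(X_windows)
    return np.linalg.solve(R / M + ridge * np.eye(dim), r / M)
```

Averaging the correlation matrices weights every training sample equally and amounts to a single fit on the concatenated data, whereas averaging per-window decoders gives each window equal weight regardless of how well-conditioned its individual solution is; the quoted passage uses the MESD measure to compare exactly these two strategies.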