Published: 2021 · DOI: 10.1016/j.apacoust.2020.107826
Supervised binaural source separation using auditory attention detection in realistic scenarios

Cited by 9 publications (17 citation statements) · References 45 publications
“…Figure 9 compares the performance of the proposed AAD method, based on the optimal feature set (i.e., mean GFP + RR) and the GCQL classifier, with the baseline systems in terms of ACC measures. According to the accuracy criterion, the introduced AAD algorithm outperforms the baseline systems, including O'Sullivan et al [16], Lu et al [25], Ciccarelli et al [20], Geirnaert et al [26], and Zakeri et al [27]. In general, the accuracy of the baseline systems increases with longer EEG durations.…”
Section: Discussion
confidence: 99%
“…In contrast to these approaches, the informative-features technique does not require clean auditory stimuli, which makes it applicable in real-life conditions such as a cocktail party. Many EEG-derived features have been exploited for auditory attention classification [23][24][25][26][27]. Although many researchers have introduced various features for attention detection, such features could not resolve inconsistencies or ambiguities in EEG interpretations.…”
Section: Introduction
confidence: 99%
“…This function is well suited to processing and predicting events with long intervals and delays in time series. Furthermore, classification, forecasting, signal processing, and pattern recognition for high-dimensional data are prominent applications of LSTM [182], [183]. As Fig.…”
Section: Recurrent Neural Network (RNN)
confidence: 99%
“…Bi-LSTM, an extension of the traditional LSTM [33], is trained on the input sequence with two LSTMs set up in reverse order (see Figure 1). The LSTM layer mitigates the vanishing gradient problem and allows the use of deeper networks than plain recurrent neural networks (RNNs) [34,35]. In the structure of the traditional RNN and the LSTM model, information propagates along a forward path, so the output at a given time depends only on information from earlier times.…”
Section: Bidirectional Long Short-Term Memory (Bi-LSTM)
confidence: 99%
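The bidirectional arrangement described in that excerpt — one LSTM reading the sequence forward, a second reading it in reverse, with their hidden states combined per time step — can be sketched as follows. This is a minimal NumPy illustration of the generic Bi-LSTM mechanism, not the cited authors' implementation; all weight shapes, dimensions, and names here are arbitrary assumptions.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: four gates computed from input x and previous hidden h."""
    z = W @ x + U @ h + b                  # stacked pre-activations, shape (4H,)
    H = h.shape[0]
    i = 1 / (1 + np.exp(-z[:H]))           # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))        # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))      # output gate
    g = np.tanh(z[3*H:])                   # candidate cell state
    c = f * c + i * g                      # cell-state update
    h = o * np.tanh(c)                     # new hidden state
    return h, c

def bilstm(xs, params_fwd, params_bwd, H):
    """Run a forward LSTM and a time-reversed LSTM; concatenate their outputs."""
    def run(seq, params):
        h, c = np.zeros(H), np.zeros(H)
        outs = []
        for x in seq:
            h, c = lstm_step(x, h, c, *params)
            outs.append(h)
        return outs
    fwd = run(xs, params_fwd)
    bwd = run(xs[::-1], params_bwd)[::-1]  # reverse back to original time order
    return [np.concatenate([f_t, b_t]) for f_t, b_t in zip(fwd, bwd)]

# Toy usage with random weights (D = 3 input dims, H = 4 hidden units, T = 5 steps)
rng = np.random.default_rng(0)
D, H, T = 3, 4, 5
make_params = lambda: (rng.standard_normal((4 * H, D)) * 0.1,
                       rng.standard_normal((4 * H, H)) * 0.1,
                       np.zeros(4 * H))
xs = [rng.standard_normal(D) for _ in range(T)]
outs = bilstm(xs, make_params(), make_params(), H)
print(len(outs), outs[0].shape)  # T outputs, each of size 2H
```

Because the backward pass is re-reversed before concatenation, the output at each time step combines context from both before and after that step — the property that distinguishes Bi-LSTM from the strictly forward-only RNN/LSTM described in the quoted passage.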