2018
DOI: 10.48550/arxiv.1804.10322
Preprint

Classification of auditory stimuli from EEG signals with a regulated recurrent neural network reservoir

Abstract: The use of electroencephalogram (EEG) as the main input signal in brain-machine interfaces has been widely proposed due to the non-invasive nature of the EEG. Here we are specifically interested in interfaces that extract information from the auditory system, and more specifically in the task of classifying heard speech from EEGs. To do so, we propose to limit the preprocessing of the EEGs and use machine learning approaches to automatically extract their meaningful characteristics. More specifically, we use a …

Cited by 4 publications (6 citation statements)
References 15 publications
“…The raw input representation did not give the best results in our deep CNN regressors, but it remained well above the chance performance rate of 10%. Meanwhile, the mel-spectrogram worked better across both the raw and PSD input representations, with performance comparable to studies where only EEG classification was conducted [13,17,18]. Classifying outputs to their stimuli classes reveals the fidelity of semantically relevant reconstructions.…”
Section: Representations
confidence: 61%
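To make the three compared representations concrete, here is a minimal Python sketch, assuming a single-channel signal `x` sampled at `fs` Hz; the function name, parameter values, and use of librosa for the mel filterbank are illustrative assumptions, not taken from the cited work.

```python
import numpy as np
from scipy.signal import welch
import librosa  # assumed available; used only for the mel filterbank

def representations(x, fs, n_mels=64):
    """Compute three candidate input representations for one signal.

    x  : 1-D float array (one channel of the time series)
    fs : sampling rate in Hz
    """
    raw = x  # raw representation: the untouched time series

    # PSD representation: Welch power spectral density estimate
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 1024))

    # Mel-spectrogram representation: log-scaled mel filterbank energies
    mel = librosa.feature.melspectrogram(y=x.astype(np.float32), sr=fs, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)

    return raw, (freqs, psd), log_mel
```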
“…The first four minutes of all recordings were used and cut into five-second chunks. To balance the train and test … Other EEG studies also keep generalization within participants [5,13], and a recent study has shown that weak correlations across participants in the NMED-T could be why generalizing to unseen participants is difficult without many recorded participants in the dataset [14].…”
Section: Model and Training
confidence: 99%
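A minimal sketch of the segmentation and within-participant split described above: take the first four minutes of each recording, cut it into five-second chunks, and split the chunks so train and test both come from the same participant. The array shapes, 80/20 split ratio, and function names are assumptions.

```python
import numpy as np

def chunk_recording(eeg, fs, minutes=4, chunk_s=5):
    """Cut the first `minutes` of a (channels, samples) recording into
    non-overlapping `chunk_s`-second windows."""
    usable = eeg[:, : minutes * 60 * fs]
    step = chunk_s * fs
    n_chunks = usable.shape[1] // step
    return np.stack([usable[:, i * step:(i + 1) * step] for i in range(n_chunks)])

def within_participant_split(chunks, test_frac=0.2, seed=0):
    """Shuffle one participant's chunks and split them, keeping
    generalization within the participant (no unseen-participant test)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(chunks))
    n_test = int(len(chunks) * test_frac)
    return chunks[idx[n_test:]], chunks[idx[:n_test]]
```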
“…Many LSM users implement a non-spiking readout layer W_O using machine learning methods [13, 21–24] or even n-layer formal neural networks [25]. In this work, we use a single formal layer with a softmax activation function as the output layer, as we focus on the reservoir component of the LSM.…”
Section: Related Work
confidence: 99%
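A hedged sketch of such a readout: a single formal (non-spiking) layer with softmax activation, trained with cross-entropy on reservoir state vectors. The feature choice (time-averaged reservoir activity), learning rate, and epoch count are assumptions, not the cited setup.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_readout(states, labels, n_classes, lr=0.1, epochs=200):
    """Fit W_O for a single softmax output layer on reservoir states.

    states : (n_samples, n_units) time-averaged reservoir activity
    labels : (n_samples,) integer class labels
    """
    n, d = states.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[labels]            # one-hot targets
    for _ in range(epochs):
        P = softmax(states @ W)              # class probabilities
        W -= lr * states.T @ (P - Y) / n     # cross-entropy gradient step
    return W
```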
“…While W_R is not trained, many authors include unsupervised neuroplasticity rules [22, 23, 26–28] as a way to either keep biological realism or provide higher computational performance. Similarly, several studies have looked at the initialization of W_R in combination with various topologies, such as small-world [27, 29–31] and scale-free networks [31, 32].…”
Section: Optimization of the Reservoir
confidence: 99%
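To illustrate the topology-based initialization mentioned above, a minimal sketch using networkx generators for small-world (Watts–Strogatz) and scale-free (Barabási–Albert) graphs, rescaled to a target spectral radius; all parameter values are illustrative assumptions.

```python
import numpy as np
import networkx as nx

def init_reservoir(n=500, topology="small_world", spectral_radius=0.9, seed=0):
    """Build an untrained recurrent weight matrix W_R on a chosen topology."""
    rng = np.random.default_rng(seed)
    if topology == "small_world":
        g = nx.watts_strogatz_graph(n, k=10, p=0.1, seed=seed)
    elif topology == "scale_free":
        g = nx.barabasi_albert_graph(n, m=5, seed=seed)
    else:
        raise ValueError(topology)
    W = nx.to_numpy_array(g)
    W *= rng.normal(size=W.shape)                   # random signed weights on edges
    rho = np.max(np.abs(np.linalg.eigvals(W)))      # current spectral radius
    return W * (spectral_radius / rho)              # rescale for stable dynamics
```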
“…Using RNNs with LSTM architectures can take temporal dependence into account for EEG time-series signals and achieved an average classification accuracy of 93.0% [131]. In research applying RNNs to auditory stimulus classification, [132] used a regulated RNN reservoir to classify the three English vowels a, u, and i. Their results showed an average accuracy of 83.2%, with the RNN approach outperforming a deep neural network method.…”
Section: Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM)
confidence: 99%
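For contrast with the reservoir approach, a minimal PyTorch sketch of the kind of LSTM classifier this passage describes: an LSTM over multichannel EEG windows followed by a linear head over three vowel classes. The layer sizes, channel count, and window length are assumptions, not the cited architecture.

```python
import torch
import torch.nn as nn

class EEGLSTMClassifier(nn.Module):
    """LSTM over (batch, time, channels) EEG windows, 3-way vowel output."""
    def __init__(self, n_channels=64, hidden=128, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, channels)
        _, (h_n, _) = self.lstm(x)         # h_n: (1, batch, hidden), last time step
        return self.head(h_n[-1])          # logits over the 3 vowel classes

# usage sketch: 8 windows of 5 s at an assumed 128 Hz, 64 channels
model = EEGLSTMClassifier()
logits = model(torch.randn(8, 5 * 128, 64))
```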