2019 9th International Conference on Cloud Computing, Data Science & Engineering (Confluence)
DOI: 10.1109/confluence.2019.8776889
Multi-class time series classification of EEG signals with Recurrent Neural Networks

Cited by 15 publications (9 citation statements)
References 16 publications
“…The purpose of this experiment is to evaluate the performance of the ASGCN model against the GRU (Dutta, 2019), CGCRN (Xu et al., 2019), and Graph WaveNet (Wu et al., 2019) models. Each experiment used fNIRS data from the whole brain region as the model input.…”
Section: Identification Ability Comparison of Different Classification...
confidence: 99%
“…Recurrent neural networks (RNNs) [13, 14] are the most commonly used method. Modelling time-series data with an RNN captures the temporal correlation of the data, which is reflected in the connections between hidden-layer nodes across time steps: the hidden layer receives not only the output of the input layer but also the hidden layer's own output from the previous time step.…”
Section: Related Work
confidence: 99%
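The recurrence described in the statement above can be sketched as a single vanilla-RNN update, where the new hidden state mixes the current input with the previous hidden state. This is a minimal pure-Python illustration with made-up weights, not code from the cited paper:

```python
import math

def rnn_step(x_t, h_prev, W_xh, W_hh, b):
    """One vanilla-RNN update: h_t = tanh(W_xh x_t + W_hh h_{t-1} + b).
    The hidden layer sees both the current input and its own previous output."""
    h_new = []
    for i in range(len(h_prev)):
        s = b[i]
        s += sum(W_xh[i][j] * x_t[j] for j in range(len(x_t)))      # input contribution
        s += sum(W_hh[i][j] * h_prev[j] for j in range(len(h_prev)))  # recurrent contribution
        h_new.append(math.tanh(s))
    return h_new

# Hypothetical 2-dim input, 2-dim hidden state, just to trace the update.
W_xh = [[0.5, -0.2], [0.1, 0.3]]
W_hh = [[0.4, 0.0], [0.0, 0.4]]
b = [0.0, 0.0]

h = [0.0, 0.0]                        # initial hidden state
for x_t in [[1.0, 0.0], [0.0, 1.0]]:  # a length-2 input sequence
    h = rnn_step(x_t, h, W_xh, W_hh, b)
print(h)
```

Because the recurrent term `W_hh h_prev` feeds each step, information from earlier inputs persists in the hidden state, which is exactly the time-correlation property the excerpt attributes to RNNs.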
“…To maintain memory of, and dependence on, earlier data, the RNN variants LSTM and GRU were proposed in turn. Dutta [13] compared the simple RNN, LSTM, and GRU on EEG signal data. As the number of layers increases, training takes longer, but the accuracy of the latter two is significantly higher than that of the simple RNN.…”
Section: Related Work
confidence: 99%
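The memory-keeping that distinguishes the GRU from a simple RNN comes from its gates: an update gate decides how much of the previous hidden state to keep versus overwrite. The scalar sketch below uses one common gate convention and hypothetical weights purely for illustration; it is not the configuration from Dutta's experiments:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x_t, h_prev, W, U, b):
    """One GRU update (scalar case for clarity).
    z (update gate) blends old state and candidate; r (reset gate)
    controls how much old state enters the candidate."""
    z = sigmoid(W["z"] * x_t + U["z"] * h_prev + b["z"])            # update gate
    r = sigmoid(W["r"] * x_t + U["r"] * h_prev + b["r"])            # reset gate
    h_cand = math.tanh(W["h"] * x_t + U["h"] * (r * h_prev) + b["h"])
    return (1.0 - z) * h_prev + z * h_cand

# Hypothetical scalar weights, just to trace the gating behaviour.
W = {"z": 0.8, "r": 0.8, "h": 1.0}
U = {"z": 0.5, "r": 0.5, "h": 1.0}
b = {"z": 0.0, "r": 0.0, "h": 0.0}

h = 0.0
for x_t in [1.0, 0.0, 0.0]:  # a brief "impulse" followed by silence
    h = gru_step(x_t, h, W, U, b)
print(h)  # some trace of the impulse survives in the hidden state
```

With zero input, the `(1 - z) * h_prev` term lets part of the old state pass through unchanged, so earlier inputs decay slowly instead of vanishing in one step — the "memory and dependence" the excerpt refers to.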
“…Many approaches have been tried by different researchers on cleaned and raw data from various datasets: feature extractors chosen for their energy compaction, recurrence plots, dictionary learning and sparse representations, Random Forest (RF) classifiers, different layers of CNNs, transfer learning, deep batch normalization, neural memory networks, etc. [30][31][32][33][34][35][36][37][38][39][40][41][42]. Real-time clinical diagnostics is slowly being attempted [43].…”
Section: Introduction
confidence: 99%