2019
DOI: 10.1109/tcyb.2017.2788081
Spatial–Temporal Recurrent Neural Network for Emotion Recognition

Abstract: In this paper, we propose a novel deep learning framework, called spatial-temporal recurrent neural network (STRNN), to integrate the feature learning from both spatial and temporal information of signal sources into a unified spatial-temporal dependency model. In STRNN, to capture those spatially co-occurrent variations of human emotions, a multidirectional recurrent neural network (RNN) layer is employed to capture long-range contextual cues by traversing the spatial regions of each temporal slice along diff…

Cited by 430 publications (194 citation statements)
References 36 publications
“…2). All achieved accuracy improvements over handcrafted approaches, but a lack of training data was also mentioned [145], [146]. For example, Yanagimoto and Sugimoto [145] divided the raw 16-channel EEG data into 1s segments and used a seven-layer CNN with 10 ms kernels on the first layer, leading to accuracy improvements of over 20%.…”
Section: Learning Spatial Features From Physiology
confidence: 99%
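The segmentation step described in the excerpt above (raw 16-channel EEG cut into 1 s windows before feeding a CNN) can be sketched as follows. The sample rate of 128 Hz and the array shapes are assumptions for illustration, not values from the cited work.

```python
import numpy as np

def segment_eeg(raw, fs, win_sec=1.0):
    """Split a (channels, samples) EEG recording into non-overlapping
    windows of win_sec seconds; a trailing partial window is dropped."""
    win = int(fs * win_sec)
    n_win = raw.shape[1] // win
    # Result shape (n_win, channels, win): each slice is one training example
    return (raw[:, :n_win * win]
            .reshape(raw.shape[0], n_win, win)
            .transpose(1, 0, 2))

# Example: 16-channel recording, 10 s long, at an assumed 128 Hz sample rate
raw = np.random.randn(16, 1280)
segs = segment_eeg(raw, fs=128)
print(segs.shape)  # (10, 16, 128)
```

Each resulting 1 s slice would then be passed to the CNN as one input, which is what lets a short recording yield many training examples.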
“…5). RNNs have been used to learn temporal context from EEG features to improve recognition accuracies [146], [55]. Brady et al [18] found that learning temporal context with LSTM and handcrafted features leads to improvements over shallow baseline models.…”
Section: Learning Temporal Features From Physiological Data
confidence: 99%
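The temporal-context idea in the excerpt above (an LSTM accumulating information across a sequence of per-window EEG feature vectors) can be illustrated with a minimal single-cell forward pass. This is a generic LSTM sketch, not the architecture of any cited model; the dimensions and random weights are assumptions.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step; the four gates are stacked in z as
    [input, forget, candidate, output]."""
    z = W @ x + U @ h + b
    H = h.size
    i = 1 / (1 + np.exp(-z[:H]))        # input gate
    f = 1 / (1 + np.exp(-z[H:2 * H]))   # forget gate
    g = np.tanh(z[2 * H:3 * H])         # candidate cell state
    o = 1 / (1 + np.exp(-z[3 * H:]))    # output gate
    c = f * c + i * g                   # cell state carries temporal context
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
D, H, T = 8, 4, 20                      # feature dim, hidden size, steps (assumed)
W = rng.normal(scale=0.1, size=(4 * H, D))
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)

h, c = np.zeros(H), np.zeros(H)
feats = rng.normal(size=(T, D))         # one feature vector per EEG window
for x in feats:
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (4,)
```

The final hidden state `h` summarizes the whole sequence and would feed a classifier head, which is the improvement over shallow per-window baselines that the excerpt describes.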
“…These features are the spectral powers of the theta (4-8 Hz), slow alpha (8-10 Hz), alpha (8-12 Hz), beta (12-30 Hz), and gamma (30+ Hz) bands for 32 electrodes, and the differences between the spectral powers of all symmetrical electrode pairs. Fisher's linear discriminant was used for feature elimination, and a Gaussian naive Bayes classifier was used for classification.…”
Section: Related Work
confidence: 99%
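The band-power features in the excerpt above can be computed with a simple periodogram per channel. This is a minimal sketch: the 45 Hz upper edge for gamma, the sample rate, and the test signal are assumptions, and the cited work may use a different estimator (e.g. Welch averaging).

```python
import numpy as np

# Band edges from the excerpt; the gamma upper edge (45 Hz) is assumed
BANDS = {"theta": (4, 8), "slow_alpha": (8, 10), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 45)}

def band_powers(x, fs):
    """Per-band spectral power of a 1-D signal via the periodogram."""
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

fs = 128
t = np.arange(fs * 4) / fs
x = np.sin(2 * np.pi * 10 * t)   # pure 10 Hz tone falls in the alpha band
p = band_powers(x, fs)
print(max(p, key=p.get))
```

The asymmetry features mentioned in the excerpt would then be differences such as `band_powers(left_ch, fs)[b] - band_powers(right_ch, fs)[b]` for each symmetric electrode pair.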
“…Their RNN consists of two fully connected LSTM layers, a dropout layer, and a dense layer. Zhang et al. [15] presented a deep learning framework called spatiotemporal recurrent neural network (STRNN) in order to combine the learning of spatiotemporal features for emotion recognition using the SJTU Emotion EEG Dataset (SEED).…”
Section: Related Work
confidence: 99%
“…Tong et al. [9] proposed a novel deep learning framework to recognize emotion states on the SEED database using the differential entropy feature, calculated over five frequency bands. Their algorithm classified four emotion states (anger, happiness, sadness, and surprise) with an accuracy of more than 90%.…”
Section: Related Work
confidence: 99%
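The differential entropy (DE) feature mentioned in the excerpt above has a simple closed form when a band-filtered EEG segment is modeled as Gaussian: h = ½ ln(2πe σ²). A minimal sketch, using synthetic Gaussian data rather than real EEG:

```python
import numpy as np

def differential_entropy(x):
    """DE of a band-limited segment under a Gaussian assumption:
    h = 0.5 * ln(2 * pi * e * var(x))."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

rng = np.random.default_rng(1)
x = rng.normal(scale=2.0, size=10_000)   # synthetic segment with sigma = 2
de = differential_entropy(x)
# For sigma = 2 the closed form is 0.5 * ln(8 * pi * e), roughly 2.11
print(de)
```

Computing this per frequency band and per channel yields the five-band DE feature vectors that the excerpt describes as inputs to the classifier.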