2020
DOI: 10.1109/access.2020.3010311

Multimodal Fused Emotion Recognition About Expression-EEG Interaction and Collaboration Using Deep Learning

Abstract: The proposed emotion recognition model is based on a hierarchical long short-term memory (LSTM) neural network for video-electroencephalogram (Video-EEG) signal interaction. The inputs are facial-video and EEG signals recorded from subjects while they watch emotion-stimulating videos; the outputs are the corresponding emotion recognition results. Facial-video features and the corresponding EEG features are extracted by a fully connected (FC) neural network at each time point. These features are fused th…
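As a rough illustration of the pipeline the truncated abstract describes (per-time-step FC feature extractors for each modality, feature fusion, and a hierarchical LSTM over the fused sequence), here is a minimal PyTorch sketch. All layer sizes, the concatenation fusion, the stacked-LSTM stand-in for the paper's hierarchy, and the classifier head are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class VideoEEGFusionLSTM(nn.Module):
    """Sketch of the abstract's pipeline under assumed dimensions: FC
    extractors applied to each modality at every time point, simple
    concatenation fusion, and a stacked LSTM standing in for the paper's
    hierarchical LSTM."""

    def __init__(self, video_dim=512, eeg_dim=128, hidden_dim=256, num_classes=4):
        super().__init__()
        # FC feature extractors applied independently at each time point
        self.video_fc = nn.Sequential(nn.Linear(video_dim, hidden_dim), nn.ReLU())
        self.eeg_fc = nn.Sequential(nn.Linear(eeg_dim, hidden_dim), nn.ReLU())
        # Two stacked LSTM layers approximate the hierarchical temporal model
        self.lstm = nn.LSTM(2 * hidden_dim, hidden_dim, num_layers=2, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, video_feats, eeg_feats):
        # video_feats: (batch, time, video_dim); eeg_feats: (batch, time, eeg_dim)
        v = self.video_fc(video_feats)      # per-step facial-video features
        e = self.eeg_fc(eeg_feats)          # per-step EEG features
        fused = torch.cat([v, e], dim=-1)   # assumed concatenation fusion
        out, _ = self.lstm(fused)           # temporal modeling of fused sequence
        return self.classifier(out[:, -1])  # emotion logits from the last step

# Minimal usage with random tensors standing in for extracted features
model = VideoEEGFusionLSTM()
logits = model(torch.randn(8, 30, 512), torch.randn(8, 30, 128))
```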

Cited by 28 publications (10 citation statements)
References 24 publications
“…Depth image sequences have proven very valuable for fast 3D human skeletal joint estimation. Owing to the rapid development of depth-sensing technology, many depth sensors with high sampling rates and low prices have recently been launched [8]. Table 1 gives information on some common depth sensors.…”
Section: Recognition of Limb Movement Characteristics in Competitive ... (mentioning)
confidence: 99%
“…1) Effects of different signals (textual, audio, visual, or physiological) on unimodal affect recognition [152,194,244,289,327,331]; 2) Effects of modality combinations and fusion strategies on multimodal affective analysis [37,371,375,383,388,400]; 3) Effects of ML-based techniques [127,184,249,305,326] or DL-based methods [146,203,273,283,316,335] on affective computing; 4) Effects of some potential factors (e.g., released databases and performance metrics) on affective computing; 5) Applications of affective computing in real-life scenarios.…”
Section: Discussion (mentioning)
confidence: 99%
“…Soleymani et al. [399] designed a video-EEG-based emotion detection framework using an LSTM-RNN and continuous conditional random fields. Wu et al. [400] proposed a hierarchical LSTM with a self-attention mechanism that fuses facial features and EEG features to compute the final emotion. Yin et al. [397] proposed an efficient end-to-end framework for EDA-music fused emotion recognition, naming it the 1-D residual temporal and channel attention network (RTCAN-1D).…”
Section: Physical-Physiological Modality Fusion for Affective Analysis (mentioning)
confidence: 99%
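The survey excerpt above describes a hierarchical LSTM topped with self-attention for fusing facial and EEG features, as attributed to Wu et al. [400]. The following minimal PyTorch sketch shows what such a self-attention fusion stage could look like; the head count, mean pooling, and feature dimension are assumptions, not the cited paper's actual design.

```python
import torch
import torch.nn as nn

class SelfAttentionFusion(nn.Module):
    """Hypothetical self-attention stage over an already-fused Video-EEG
    feature sequence, in the spirit of the hierarchical-LSTM-plus-
    self-attention design described in the survey excerpt."""

    def __init__(self, feat_dim=256):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)

    def forward(self, seq):
        # seq: (batch, time, feat_dim) fused facial+EEG features
        attended, weights = self.attn(seq, seq, seq)  # self-attention over time
        return attended.mean(dim=1), weights          # pooled emotion embedding

# Usage on a random fused sequence standing in for LSTM outputs
pooled, w = SelfAttentionFusion()(torch.randn(8, 30, 256))
```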
“…Feature extraction and fusion are the key steps in multimodal emotion recognition [3]. Li et al. [4] proposed a hierarchical modular neural network and applied it to multimodal emotion recognition.…”
Section: Introduction (mentioning)
confidence: 99%