2018
DOI: 10.3390/s18051383

Electroencephalography Based Fusion Two-Dimensional (2D)-Convolution Neural Networks (CNN) Model for Emotion Recognition System

Abstract: The purpose of this study is to improve human emotional classification accuracy using a convolutional neural network (CNN) model and to suggest an overall method to classify emotion based on multimodal data. We improved classification performance by combining electroencephalogram (EEG) and galvanic skin response (GSR) signals. GSR signals are preprocessed using the zero-crossing rate. Sufficient EEG feature extraction can be obtained through CNN. Therefore, we propose a suitable CNN model for feature extract…
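The abstract names two concrete processing steps: zero-crossing-rate preprocessing of the GSR channel and CNN-based feature extraction from EEG. As a minimal sketch of the first step, assuming a 1-D GSR trace and illustrative frame/hop lengths (none of these values come from the paper), a windowed zero-crossing rate can be computed as follows:

```python
import numpy as np

def zero_crossing_rate(signal, frame_len=128, hop=64):
    """Windowed zero-crossing rate of a 1-D signal.

    frame_len and hop are illustrative choices, not parameters
    reported in the paper.
    """
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()  # centre so sign changes are meaningful
    rates = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        # count sign changes between consecutive samples
        crossings = np.count_nonzero(np.diff(np.signbit(frame).astype(int)))
        rates.append(crossings / frame_len)
    return np.array(rates)

# Toy usage with a synthetic, GSR-like trace (assumed 128 Hz sampling).
t = np.linspace(0, 10, 1280)
gsr = 0.5 * np.sin(2 * np.pi * 0.3 * t) + 0.05 * np.random.randn(t.size)
print(zero_crossing_rate(gsr))
```

The resulting per-frame rates could then serve as the GSR feature vector that is fused with CNN-derived EEG features; one generic fusion layout is sketched further below.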

Cited by 166 publications (90 citation statements). References: 37 publications.
“…All experiments are carried out within the two emotion dimensions of arousal and valence. … (30-45) bands. In addition, the frequency-domain features also include the difference of average PSD (4 power differences × 14 channel pairs) in theta, alpha, beta, and gamma bands for 14 EEG channel pairs (Fp2-Fp1, AF4-AF3, F4-F3, F8-F7, FC6-FC5, FC2-FC1, C4-C3, T8-T7, CP6-CP5, CP2-CP1, P4-P3, P8-P7, PO4-PO3, and O2-O1) between the right and left scalps.…”
Section: Results (mentioning); confidence: 99%
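The feature described in this snippet, the difference of average PSD between homologous right/left channel pairs in four frequency bands, can be sketched as follows. This is an illustration only: the sampling rate, Welch settings, band edges, and channel-index pairs below are assumptions, not the citing paper's exact values.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power(x, fs, lo, hi):
    """Average Welch PSD of a 1-D signal within [lo, hi] Hz."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 256))
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def pair_psd_differences(eeg, fs, pairs):
    """Difference of average band PSD for each (right, left) channel-index pair.

    With 4 bands and 14 pairs this yields the '4 power differences x 14
    channel pairs' feature count mentioned in the snippet above.
    """
    feats = []
    for lo, hi in BANDS.values():
        for right_idx, left_idx in pairs:
            feats.append(band_power(eeg[right_idx], fs, lo, hi)
                         - band_power(eeg[left_idx], fs, lo, hi))
    return np.array(feats)

# Toy usage: random 32-channel EEG, 8 s at an assumed 128 Hz,
# with placeholder index pairs standing in for Fp2-Fp1, AF4-AF3, ...
eeg = np.random.randn(32, 8 * 128)
pairs = [(1, 0), (3, 2), (5, 4)]
print(pair_psd_differences(eeg, fs=128, pairs=pairs).shape)  # (12,)
```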
“…Wang and Shang [31] presented an emotion recognition method based on deep belief networks (DBNs). Kwon et al. [32] used fusion features extracted from EEG and galvanic skin response (GSR) signals together with convolutional neural networks (CNNs) for emotion recognition. In our previous work, an emotion recognition method based on an improved deep belief network with glia chains (DBN-GC) and multiple domain EEG features was proposed [23].…”
Section: Introduction (mentioning); confidence: 99%
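Kwon et al. [32] is the paper indexed on this page; its key idea is fusing CNN-extracted EEG features with GSR features before classification. The sketch below shows one generic way to wire such a two-branch fusion model in Keras. Every shape, filter count, and the two-class softmax head are assumptions for illustration, not the architecture published in the paper.

```python
import tensorflow as tf

# EEG branch: 2D convolutions over a (channels x time) map.
# Input sizes and layer widths are illustrative assumptions.
eeg_in = tf.keras.Input(shape=(32, 128, 1), name="eeg_map")
x = tf.keras.layers.Conv2D(16, (3, 3), activation="relu")(eeg_in)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Conv2D(32, (3, 3), activation="relu")(x)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Flatten()(x)

# GSR branch: a small vector of precomputed features (e.g. windowed ZCR values).
gsr_in = tf.keras.Input(shape=(8,), name="gsr_features")

# Feature-level fusion followed by a binary (e.g. high/low arousal) classifier.
fused = tf.keras.layers.Concatenate()([x, gsr_in])
fused = tf.keras.layers.Dense(64, activation="relu")(fused)
out = tf.keras.layers.Dense(2, activation="softmax")(fused)

model = tf.keras.Model(inputs=[eeg_in, gsr_in], outputs=out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```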
“…EMG: Hand motion recognition [9-17], Muscle activity recognition [18-23]. ECG: Heartbeat signal classification, Heart disease classification [49-63], Sleep-stage classification [64-68], Emotion classification [69], Age and gender prediction [70]. EEG: Brain functionality classification, Brain disease classification, Emotion classification [122-129], Sleep-stage classification [130-133]…”
Section: EMG (mentioning); confidence: 99%
“…The impurity of this algorithm arises when traces of one class are divided into other classes; therefore, the classification rate is low. In [20], electroencephalogram (EEG) and galvanic skin response (GSR) signals were used for emotion recognition. A zero-crossing rate is used for pre-processing the GSR signal.…”
Section: Introduction (mentioning); confidence: 99%