Proceedings of the 2018 2nd International Conference on Computer Science and Artificial Intelligence (CSAI 2018)
DOI: 10.1145/3297156.3297177
Emotion Classification Using EEG Signals

Cited by 45 publications (31 citation statements)
References 8 publications
“…As described earlier, such a result is only slightly better than random chance; since the designed system took all 40 channels of the epoch data as features, the discrepancies and errors were larger, as the other channels did not change much throughout the 60 s videos. According to [34,35], using PCA made it possible to find the channels that gave the best possible results, which aligns with the channels "F3, C3, F4, C4, AF3, PO4, CP1" being identified as crucial for obtaining the best results. These channels indicated that the bottom left hemisphere of the brain was responsible for triggering emotional states.…”
Section: Results · Citation type: mentioning
Confidence: 99%
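The PCA-based channel selection described above can be sketched as ranking channels by their loadings on the leading principal components. This is a minimal illustration on synthetic data: the array shapes, the `ch*` channel names, and the use of mean power as a per-channel feature are all assumptions, not the cited papers' actual pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical setup: 40 trials, 32 channels, 128 samples per trial.
rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 40, 32, 128
channel_names = [f"ch{i}" for i in range(n_channels)]

# Use per-channel mean power across each trial as a simple stand-in feature.
X = rng.standard_normal((n_trials, n_channels, n_samples))
band_power = (X ** 2).mean(axis=2)          # shape: (trials, channels)

pca = PCA(n_components=5)
pca.fit(band_power)

# Score each channel by the magnitude of its loadings, weighted by the
# explained variance of each component, then keep the top 7 channels.
loadings = np.abs(pca.components_)          # shape: (components, channels)
scores = pca.explained_variance_ratio_ @ loadings
top = [channel_names[i] for i in np.argsort(scores)[::-1][:7]]
print(top)
```

With real EEG data, the seven top-ranked channels would play the role of "F3, C3, F4, C4, AF3, PO4, CP1" in the quoted result.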
“…EEG signals also have a relatively low signal-to-noise ratio (SNR) and are susceptible to distortion from artifactual interference (e.g., eye movements) [28]. To remove these artifacts and make EEG signals more correlated with the target events, EEG signals are usually analyzed in five frequency bands, i.e., delta (1–3 Hz), theta (4–7 Hz), alpha (8–13 Hz), beta (14–30 Hz), and gamma (31–50 Hz) [27]. Useful features can be extracted from these five frequency bands for detailed analysis of specific tasks [29,30].…”
Section: Characteristics Of EEG Signals · Citation type: mentioning
Confidence: 99%
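Splitting a signal into these five bands can be sketched with a simple FFT-based band-power computation. This is a sketch on synthetic data, assuming a 128 Hz sampling rate; the quoted works do not specify this particular method.

```python
import numpy as np

fs = 128                                    # Hz, an assumed sampling rate
bands = {
    "delta": (1, 3), "theta": (4, 7), "alpha": (8, 13),
    "beta": (14, 30), "gamma": (31, 50),
}

rng = np.random.default_rng(1)
signal = rng.standard_normal(fs * 10)       # 10 s of synthetic "EEG"

# Power spectrum via the real FFT; frequency bin for each coefficient.
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2 / signal.size

# Sum spectral power inside each band's frequency range.
band_power = {
    name: float(power[(freqs >= lo) & (freqs <= hi)].sum())
    for name, (lo, hi) in bands.items()
}
print(band_power)
```

In practice a band-pass filter bank (e.g., Butterworth filters) is often used instead, but summing FFT power over each band gives the same kind of per-band feature.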
“…To compare the effectiveness of the examined features from different TW lengths, six classical classifiers were used for emotion recognition: KNN, LR, SVM, GNB, Multilayer Perceptron (MLP), and Bootstrap Aggregating (Bagging). These classifiers are among the most frequently used, with high accuracies and strong adaptability to different classification tasks [15,42,43]. The Python machine learning module sklearn was used to construct the models, and the relevant parameter settings are listed in Table 3.…”
Section: Extracting Features Based On TW · Citation type: mentioning
Confidence: 99%
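The six sklearn classifiers named above can be set up as follows. This is a minimal sketch using default hyperparameters and synthetic features; the cited paper's actual parameter settings (its Table 3) and EEG features are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the extracted EEG features.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# The six classifiers named in the quoted passage, sklearn defaults.
classifiers = {
    "KNN": KNeighborsClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "GNB": GaussianNB(),
    "MLP": MLPClassifier(max_iter=500, random_state=0),
    "Bagging": BaggingClassifier(random_state=0),
}

# 5-fold cross-validated accuracy for each model.
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in classifiers.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")
```

Swapping in real band-power features and the paper's tuned parameters would reproduce the comparison the citing authors describe.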
“…
Classifier                      References    Average performance
Neural Network                  [31-33]       85.80%
Support Vector Machine          [34-36]       77.80%
K-Nearest Neighbor              [33,37,38]    88.94%
Multi-layer Perceptron          [38-40]       78.16%
Bayes                           [41-43]       69.62%
Extreme Learning Machine        [41]          87.10%
K-Means                         [43]          78.06%
Linear Discriminant Analysis    [42]          71.30%
Gaussian Process                [44]          71.30%

To improve on the performance of the SOA methods, we used the features generated by M3GP in Figure 15. This kind of transfer learning was used successfully in [45]; the best training transformation found by M3GP was used to transform the dataset into a new one (M3GP tree in Section 4.2), since these new features contain more information that simplifies the learning process of the SOA methods.…”
Section: Classifier Average Performance · Citation type: mentioning
Confidence: 99%