2020
DOI: 10.1109/access.2020.3032380

Discrimination of Genuine and Acted Emotional Expressions Using EEG Signal and Machine Learning

Abstract: We present one of the first studies that attempt to differentiate between genuine and acted emotional expressions using EEG data. We present the first EEG dataset (available here) with recordings of subjects showing genuine and fake emotional expressions. We build our experimental paradigm around the classification of smiles: genuine smiles, fake/acted smiles, and neutral expressions. We propose multiple methods to extract intrinsic features from three EEG emotional expression classes: genuine, neutral, and fake/acted smil…
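
As a rough illustration of the three-class paradigm the abstract describes (genuine smile vs. fake/acted smile vs. neutral), the sketch below shows a generic EEG feature-classification pipeline in Python with scikit-learn. This is not the authors' pipeline; the feature matrix, labels, and band-power layout are hypothetical placeholders.

# Minimal sketch of a 3-class EEG expression classifier (genuine / fake / neutral).
# Hypothetical example with placeholder data, not the method used in the paper.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 64 * 5))   # 90 trials x (64 channels * 5 assumed band-power features)
y = np.repeat([0, 1, 2], 30)        # 0 = genuine, 1 = fake/acted, 2 = neutral

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("mean 5-fold accuracy:", scores.mean())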

Cited by 24 publications (34 citation statements). References 45 publications.
“…In contrast, the studies that did not use this shared data set [ 65 , 218 , 219 , 225 , 226 , 227 , 228 , 229 , 230 ] all achieved higher classification performance by using SVM, KNN, and ANNs algorithms. For example, Seo et al [ 227 ] have compared different ML classifiers to classify boredom and non-boredom in 28 participants on the basis of the historical models of emotion and found that the KNN outperformed both RF and ANNs.…”
Section: Discussion (mentioning)
confidence: 94%
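
The classifier comparison described in this statement (KNN vs. random forest vs. ANNs on EEG-derived features) can be reproduced in outline with scikit-learn. The sketch below uses placeholder data and arbitrary hyperparameters; it is not the setup of Seo et al.

# Hypothetical comparison of KNN, random forest, and a small ANN on EEG features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 40))        # placeholder feature matrix (trials x features)
y = rng.integers(0, 2, size=200)      # placeholder binary labels (e.g. boredom vs. non-boredom)

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "RF": RandomForestClassifier(n_estimators=100, random_state=1),
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=1),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean 5-fold accuracy = {acc:.2f}")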
“… 60 channels, own database, PCA, SVM, accuracy = 71.7; [251] 2007, MWL, 7 subj., 6 channels, own database, AR, RBF-SVM, accuracy = 70; [229] 2020, ER, 28 subj., 64 channels, own database, DWT/EMD, ANNs/KNN/SVM, accuracy = 94.3; [297] 2016, ERP, 52 subj.…”
Section: Table A1 (mentioning)
confidence: 99%
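
Several of the tabulated studies, including the entry for [229], extract features with the discrete wavelet transform (DWT). Below is a minimal sketch of DWT sub-band features for a single EEG channel using the PyWavelets package; the wavelet, decomposition level, and statistics are assumptions for illustration, not taken from the cited works.

# Hypothetical DWT feature extraction for one EEG channel.
import numpy as np
import pywt

def dwt_features(signal, wavelet="db4", level=4):
    """Return simple statistics (mean |x|, std, energy) for each DWT sub-band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)  # [cA4, cD4, cD3, cD2, cD1]
    feats = []
    for c in coeffs:
        feats.extend([np.mean(np.abs(c)), np.std(c), np.sum(c ** 2)])
    return np.array(feats)

rng = np.random.default_rng(2)
eeg_channel = rng.normal(size=512)       # placeholder single-channel EEG segment
print(dwt_features(eeg_channel).shape)   # (level + 1) sub-bands * 3 statistics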
“…Keywords: two-dimensional data, three-dimensional data, electroencephalogram, emotion, long short-term memory network. INTRODUCTION: In recent years, many methods have been used to extract features from human emotional signals, such as facial expressions [1], voice [2,3], eye blinks [4], or physiological signals. Of these, only expressions based on physiological signals are rated highly by researchers for reliability [5], because researchers have confirmed that physiological signals are difficult to fake [6].…”
Section: Application of the Long Short-Term Memory Network Algorithm to EEG Signal Classification (unclassified)
“…We are using this emotion model in the study. We selected images from the Geneva Affective Picture Database (GAPED) image dataset [13], [14], as stimuli. These images are designed to elicit certain kind of emotional response from the subjects.…”
Section: Introduction (mentioning)
confidence: 99%
“…The main contribution of this work lies in the use of an automated feature extraction technique via multiple convolutional neural networks (CNN). In [13], the features were extracted manually using the discrete wavelet transform (DWT) and empirical mode decomposition (EMD). In addition, we have validated the proposed CNN networks on the raw data using multiple other techniques like long short-term memory (LSTM) networks, shallow artificial neural network (ANN), and support vector machines (SVM).…”
Section: Introduction (mentioning)
confidence: 99%
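
This citing work replaces the manual DWT/EMD features of [13] with CNN-based feature learning on raw EEG. The sketch below is a generic 1D CNN classifier in PyTorch; the layer sizes, input shape, and class count are chosen arbitrarily for illustration and are not taken from either paper.

# Hypothetical 1D CNN for raw EEG classification (channels x time -> 3 classes).
import torch
import torch.nn as nn

class EEGConvNet(nn.Module):
    def __init__(self, n_channels=64, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, channels, time)
        z = self.features(x).squeeze(-1)  # (batch, 64) learned features
        return self.classifier(z)

model = EEGConvNet()
dummy = torch.randn(8, 64, 512)           # placeholder batch of raw EEG segments
print(model(dummy).shape)                 # torch.Size([8, 3])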