2016
DOI: 10.1109/taffc.2015.2436926

Analysis of EEG Signals and Facial Expressions for Continuous Emotion Detection

Abstract: Emotions are time-varying affective phenomena that are elicited as a result of stimuli. Videos, and movies in particular, are made to elicit emotions in their audiences. Detecting the viewers' emotions instantaneously can be used to find the emotional traces of videos. In this paper, we present our approach to instantaneously detecting video viewers' emotions from electroencephalogram (EEG) signals and facial expressions. A set of emotion-inducing videos was shown to participants while their fac…

Cited by 451 publications (310 citation statements)
References 42 publications
“…The proposed quaternion-based features improve the overall results by more than 1%. The proposed facial features also provide better F1 scores than the ones used in [20] in most of the classification scenarios. On the other hand, the results of the combined features are not always consistent in terms of which combination is the best one.…”
Section: Results (mentioning confidence: 96%)
“…A leave-one-out approach and k-fold cross-validation are applied for all the participants in our database. These results are compared with those obtained using the features suggested in [20]. Tables 2 and 3 show the F1 scores for all the modalities and both classifiers, SVM and GentleBoost, respectively.…”
Section: Results (mentioning confidence: 99%)