2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC) 2012
DOI: 10.1109/icsmc.2012.6378149
Classification of affects using head movement, skin color features and physiological signals

Abstract: The automated detection of emotions opens the possibility of new applications in areas such as education, mental health, and entertainment. There is increasing interest in detection techniques that combine multiple modalities. In this study, we introduce automated techniques to detect users' affective states from a fusion model of facial videos and physiological measures. The natural behavior expressed on faces and their physiological responses were recorded from subjects (N=20) while they viewed im…

Cited by 16 publications (7 citation statements)
References 21 publications
“…The best result for unimodal fusion was obtained using feature-level fusion, and there was also an improvement in processing time compared to existing methods. The authors (Monkaresi, Hussain, & Calvo, 2012) combined head movement, physiology, and facial color using early fusion for affect classification, achieving statistically higher accuracy than the unimodal approaches.…”
Section: Feature Level or Early Fusion
confidence: 99%
“…The statistical data are stored according to object color, eye color, eye movement, the position of the eyes on the head, and sideways head movement. Various recorded video files can also be analyzed using MATLAB and OpenCV [11]. In this article, various geometric and chromatic features were used as image-based features.…”
Section: Literature Review
confidence: 99%
“…The presentation method is standardized such that all individuals have the same viewing experience. For example, the images could each be presented for ten seconds on a computer screen that is a fixed distance from where the individual is sitting, with a constant screen resolution, screen brightness, and image size (Monkaresi, Hussain, & Calvo, 2012). The images themselves can be selected on the basis of which emotions should be elicited from a database of standardized images.…”
Section: Emotional Images
confidence: 99%
“…They used this data to train unimodal and multimodal emotion detectors using EEG and peripheral physiological signals. Leon et al (2007) and Monkaresi et al (2012) followed similar protocols, but collected self-reports after every IAPS image, instead of after sets of images.…”
Section: Emotional Images
confidence: 99%