DOI: 10.1007/978-3-540-72849-8_6

Real-Time Facial Expression Recognition for Natural Interaction

Cited by 18 publications (10 citation statements)
References 7 publications
“…The emotional classification system in this ontology database refines Ekman's six basic emotion categories. [14][15][16] The category "good" was added to the lexical ontology, and emotions in Chinese were divided into 7 coarse categories (pleasure, good, anger, sadness, fear, disgust, and surprise) and 21 finer subcategories (including praise, boredom, derogation, hatred, etc.). The emotional vocabulary ontology distinguishes seven parts of speech: noun, verb, adj, adv, nw, idiom, and prep.…”
Section: Construction of Emotional Words Dictionary (mentioning)
confidence: 99%
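The classification scheme described above (7 coarse categories, 21 finer subcategories, 7 part-of-speech types) can be sketched as a small data structure. This is an illustrative sketch only; the field names and the `LexiconEntry` class are assumptions, not the actual schema of the ontology database.

```python
# Illustrative sketch of an emotion-lexicon entry; the class and field names
# are hypothetical, not the real schema of the ontology described above.
from dataclasses import dataclass

# The 7 coarse categories and 7 part-of-speech types listed in the text.
CATEGORIES = ["pleasure", "good", "anger", "sadness", "fear", "disgust", "surprise"]
POS_TAGS = ["noun", "verb", "adj", "adv", "nw", "idiom", "prep"]

@dataclass
class LexiconEntry:
    word: str
    category: str      # one of the 7 coarse categories
    subcategory: str   # one of the 21 finer subcategories (e.g. "praise")
    pos: str           # one of the 7 part-of-speech types

    def __post_init__(self):
        # Validate against the controlled vocabularies above.
        assert self.category in CATEGORIES, f"unknown category: {self.category}"
        assert self.pos in POS_TAGS, f"unknown POS tag: {self.pos}"

# Example entry: a praise word filed under the added "good" category.
entry = LexiconEntry(word="example", category="good", subcategory="praise", pos="adj")
```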
“…Besides recognizing the user's facial expressions with a webcam (Cerezo et al., 2007), new modules have been added to the system for the detection of affective information. In particular, affective cues are extracted from the user's typed-in text, and an analyzer of keystroke and mouse-click patterns is currently being developed to detect states of boredom, confusion, frustration, and nervousness in the user.…”
Section: Maxine: An Animation Engine for Multimodal Emotional Communication (mentioning)
confidence: 99%
“…The images used in Experiment 1, plus the expressions for gratification, hope, liking, pride, relief, remorse, resentment, and shame, were validated using an automatic recognizer developed by Cerezo et al. 16 The method studies the variation of a number of face parameters (distances and angles between certain feature points of the face) with respect to the neutral expression. The characteristic points are based on those defined in the MPEG-4 standard.…”
Section: Objective Evaluation by Automatic Recognizer (mentioning)
confidence: 99%
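The distance-based features described in the statement above can be sketched in a few lines. This is a minimal illustration of the general idea (distance variation between feature points relative to the neutral face), not the authors' implementation; the point indices and coordinates are hypothetical.

```python
# Minimal sketch (not the authors' method) of distance-based facial features:
# for each pair of feature points, measure how the inter-point distance
# changes with respect to the neutral expression.
import math

def distance(p, q):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def feature_deltas(points, neutral_points, pairs):
    """For each (i, j) point pair, return the distance change vs. the neutral face."""
    return [
        distance(points[i], points[j]) - distance(neutral_points[i], neutral_points[j])
        for i, j in pairs
    ]

# Hypothetical feature points: index 0 = left mouth corner, 1 = right mouth corner.
neutral = {0: (30.0, 80.0), 1: (70.0, 80.0)}
smiling = {0: (25.0, 78.0), 1: (75.0, 78.0)}

# The mouth widens by 10 units when smiling relative to the neutral face.
deltas = feature_deltas(smiling, neutral, [(0, 1)])  # → [10.0]
```

A real recognizer would track many such pairs (and angles) over the MPEG-4 feature points and feed the resulting vector to a classifier; this sketch only shows the per-pair measurement.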