Proceedings of the 5th International Conference on Multimodal Interfaces - ICMI '03 2003
DOI: 10.1145/958468.958479
Real time facial expression recognition in video using support vector machines

Abstract: Enabling computer systems to recognize facial expressions and infer emotions from them in real time presents a challenging research topic. In this paper, we present a real time approach to emotion recognition through facial expression in live video. We employ an automatic facial feature tracker to perform face localization and feature extraction. The facial feature displacements in the video stream are used as input to a Support Vector Machine classifier. We evaluate our method in terms of recognition accuracy…
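
The pipeline described in the abstract — per-landmark displacements from a feature tracker fed into an SVM classifier — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic landmark data, the number of emotion classes, and the RBF kernel choice are assumptions standing in for real tracker output and training labels.

```python
import numpy as np
from sklearn.svm import SVC

N_LANDMARKS = 22  # the paper tracks 22 facial feature points


def displacement_features(neutral, peak):
    """Per-landmark Euclidean displacement between a neutral frame
    and a peak-expression frame.

    neutral, peak: (N_LANDMARKS, 2) arrays of (x, y) positions.
    Returns a (N_LANDMARKS,) feature vector.
    """
    return np.linalg.norm(peak - neutral, axis=1)


# Illustrative synthetic data: random landmark positions standing in
# for tracker output; real training would use labelled video frames.
rng = np.random.default_rng(0)
X = np.stack([
    displacement_features(rng.normal(size=(N_LANDMARKS, 2)),
                          rng.normal(size=(N_LANDMARKS, 2)))
    for _ in range(60)
])
y = rng.integers(0, 6, size=60)  # six basic-emotion labels (assumed)

clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict(X[:1])  # classify one displacement vector
```

In a live system the same `displacement_features` vector would be recomputed each time the tracker reports a new peak frame, so classification cost is a single SVM evaluation per frame.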

Cited by 142 publications (81 citation statements); references 11 publications.
“…Point-based approaches use fiducial points for shape representation. Michel et al [33] use a tracker to obtain 22 fiducial points and calculate the distance of each point between a neutral and a peak frame. These distances are used as features for a Support Vector Machine (SVM) algorithm in order to classify the emotions.…”
Section: Facial Emotion Recognition Approaches
“…Once the data was collected and preprocessed, the features were extracted. The spatio-temporal facial features studied were based on the work of Michel et al [33]. The distance of each coordinate to the nose point was measured.…”
Section: Emotion Recognition Using Fiducial Points and EEG Quaternions
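
The distance-to-nose feature mentioned in this excerpt reduces to a per-landmark Euclidean norm. The sketch below is illustrative only: the landmark layout and the index of the nose point are assumptions, since the cited work does not fix them here.

```python
import numpy as np


def nose_distances(landmarks, nose_idx):
    """Euclidean distance from each fiducial point to the nose point.

    landmarks: (N, 2) array of (x, y) coordinates.
    nose_idx: index of the nose landmark within the array (assumed).
    """
    return np.linalg.norm(landmarks - landmarks[nose_idx], axis=1)


# Toy example: four points with the nose at index 0 (illustrative).
pts = np.array([[0.0, 0.0], [3.0, 4.0], [0.0, 5.0], [6.0, 8.0]])
d = nose_distances(pts, nose_idx=0)  # → [0., 5., 5., 10.]
```

Anchoring distances to a single reference point makes the features invariant to in-plane translation of the face, which is presumably why the nose point is used.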
“…We used a similar methodology to that developed by Wimmer et al [4], which combines a multitude of qualitatively different features [19], determines the most relevant features using machine learning, and learns objective functions from annotated images [18]. To extract descriptive features from the image, Michel et al [14] extracted the locations of 22 feature points within the face and determined their motion between an image showing the neutral state of the face and an image representing a facial expression. The very similar approach of Cohn et al [15] uses hierarchical optical flow to determine the motion of 30 feature points.…”
Section: Related Work
“…This approach is applied by Kotsia et al [16] to design Support Vector Machines (SVM) for classification. Michel et al [14] train a Support Vector Machine (SVM) that determines the visible facial expression within video sequences of the Cohn-Kanade Facial Expression Database by comparing the first frame, showing the neutral expression, to the last frame, showing the peak expression. For face recognition applications, many researchers have applied model-based approaches.…”
Section: Related Work
“…The second step in the procedure consists of defining a decision/classification rule which associates the feature-based representation with the correct facial expression. Previous works have used several well-known methods for this step: HMM-based classifiers [12], template matching [18], SVMs [15], and Dynamic Bayesian Networks [9]. The standard hard classification approach associates any two examples having the same features with the same corresponding class.…”
Section: Introduction