2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG)
DOI: 10.1109/fg.2015.7284860

Automatic 3D facial expression recognition using geometric and textured feature fusion

Abstract: 3D facial expression recognition has gained increasing interest from the affective computing community, since issues such as pose variations and illumination changes that affect 2D imaging are eliminated. Many applications can benefit from this research, such as medical applications involving the detection of pain and psychological effects in patients, and human-computer interaction tasks used by today's intelligent systems. In this paper, we look into 3D Facial Expression …

Cited by 17 publications (11 citation statements)
References 22 publications
“…It contains the combination of facial muscles for each expression [12], and can be used as a tool to detect the emotional state of a person through their face. Another approach to classify emotion through facial expressions is using local and holistic feature descriptors, such as in [13]. Unlike FACS, these techniques treat the whole face the same and look for patterns throughout, and not just for certain muscles.…”
mentioning
confidence: 99%
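As an illustration of the holistic-descriptor approach mentioned in this citing passage, the sketch below computes a uniform Local Binary Pattern (LBP) histogram over the whole face, so that patterns are sought throughout the image rather than at specific muscles. The random input image, the LBP parameters, and the use of scikit-image are illustrative assumptions; this is not the specific descriptor of [13].

```python
# Minimal sketch: a holistic texture descriptor (uniform LBP histogram over the whole face).
# The face image is a random placeholder; parameters are illustrative assumptions only.
import numpy as np
from skimage.feature import local_binary_pattern

face = np.random.default_rng(2).random((64, 64))           # stand-in grayscale face image
P, R = 8, 1                                                 # 8 neighbours, radius 1
lbp = local_binary_pattern(face, P, R, method="uniform")    # per-pixel LBP codes
hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
print(hist)  # a P+2 = 10-bin holistic feature vector describing the whole face
```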
“…Features are usually calculated on the region surrounding principal facial landmarks, or on the mouth and eyes, which inherently contain essential information for emotion recognition. These key features, considered closely related to the expression categories, are then fed to various classifiers in order to perform FER, such as Support-Vector Machines (SVM) [47][48][49][50][51], Adaboost, k-Nearest Neighbors (k-NN), Linear Discriminant Analysis (LDA), Modified Principal Component Analysis (PCA), Hidden Markov Models (HMM) [44][45][46], Random Forests [52] or Neural Networks [51,53,54].…”
Section: Feature-based vs Model-based Algorithms
mentioning
confidence: 99%
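The pipeline described in this citing passage (features computed around facial landmarks, then fed to a classifier such as an SVM) can be illustrated with a minimal scikit-learn sketch. The `extract_landmark_features` stub, the synthetic data, and the kernel choice are hypothetical placeholders, not the method of any cited work.

```python
# Minimal sketch: landmark-region features -> SVM classifier (all data synthetic).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def extract_landmark_features(landmarks: np.ndarray) -> np.ndarray:
    """Toy descriptor: pairwise distances between facial landmarks."""
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(landmarks.shape[0], k=1)
    return dists[iu]  # flattened upper triangle as the feature vector

# Synthetic stand-in data: 200 faces, 68 landmarks with 3 coordinates each,
# and one of six basic expression labels per face.
rng = np.random.default_rng(0)
faces = rng.normal(size=(200, 68, 3))
labels = rng.integers(0, 6, size=200)

X = np.stack([extract_landmark_features(f) for f in faces])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("toy accuracy:", clf.score(X_test, y_test))
```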
“…Hence, a feature dimensionality reduction technique is used to reduce the size of the feature vector while retaining its quality. In 2015, Jan and Meng [49] proposed to fuse the key features obtained from the geometric and textured domains, to investigate how the overall performance is affected. In that work, too, a feature dimensionality reduction method is applied before the machine learning stage, since merging the many elements produced by the algorithms can result in a large feature vector which can slow down the system.…”
Section: Multi-modal Algorithms Using 2D and 3D Data
mentioning
confidence: 99%
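The fusion-then-reduction idea summarised in this passage can be sketched as follows: concatenate geometric and texture feature vectors, reduce the dimensionality with PCA, and only then train the classifier. The random feature vectors and the parameter choices (number of components, kernel) are illustrative assumptions, not the configuration reported by Jan and Meng [49].

```python
# Schematic sketch: feature-level fusion of geometric and texture descriptors,
# followed by PCA dimensionality reduction and an SVM. All data is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_samples = 300
geometric = rng.normal(size=(n_samples, 1500))   # e.g. depth/curvature descriptors
texture = rng.normal(size=(n_samples, 2000))     # e.g. texture histograms from 2D images
labels = rng.integers(0, 6, size=n_samples)      # six basic expressions

fused = np.hstack([geometric, texture])          # feature-level fusion: 3500-D vector

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=100),                       # dimensionality reduction before learning
    SVC(kernel="linear"),
)
model.fit(fused, labels)
print("fused dimension:", fused.shape[1],
      "-> reduced to", model.named_steps["pca"].n_components_)
```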
“…3D FER has become an extensive field of research, with many early attempts in [3], [4], [5], [6], [7] and most recent works in [8], [9], [10] that tend to use both 2D and 3D multi-modal data to further improve the accuracy. Huynh et al. [11] proposed to use deep CNNs for classifying the six basic facial expressions.…”
Section: Related Work
mentioning
confidence: 99%
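A minimal PyTorch sketch of a CNN classifier for the six basic expressions is given below, in the spirit of the deep-CNN approach attributed to Huynh et al. [11] in this passage. The architecture, the 64x64 single-channel input size, and all hyperparameters are illustrative assumptions rather than the published network.

```python
# Minimal sketch: a small CNN that maps 64x64 single-channel face maps to six
# expression logits. Architecture and sizes are illustrative assumptions only.
import torch
import torch.nn as nn

class ExpressionCNN(nn.Module):
    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)         # (N, 64, 8, 8) for 64x64 inputs
        x = torch.flatten(x, 1)
        return self.classifier(x)    # raw logits over the six expressions

# Smoke test on a random batch of 64x64 face maps.
model = ExpressionCNN()
logits = model(torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 6])
```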