2013
DOI: 10.7763/ijmo.2013.v3.247
Auto-Optimized Multimodal Expression Recognition Framework Using 3D Kinect Data for ASD Therapeutic Aid

Abstract: This paper concerns the automatic recognition of human facial expressions using a fast 3D sensor, such as the Kinect. Facial expressions represent a rich source of information regarding emotion and interpersonal communication. The ability to recognize expressions automatically will have a large impact in many areas, particularly human-computer interaction. This paper describes two frameworks for recognizing the six basic expressions using 3D data sequences that are captured in real time. Results are…
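The report reproduces only the abstract, with no implementation details. As a loose illustration, the minimal sketch below shows how one frame of tracked 3D facial points (as delivered by a Kinect-style face tracker) could be turned into a feature vector for expression classification; the landmark count, the centroid-based position normalisation, and the RMS-radius scale normalisation are assumptions made for illustration, not the paper's method.

```python
import numpy as np

def frame_features(points_3d: np.ndarray) -> np.ndarray:
    """Turn one frame of tracked 3D facial points into a flat feature vector.

    `points_3d` is an (N, 3) array of landmark coordinates; the landmark
    set and count are assumptions, not the paper's exact layout.
    """
    # Normalise for head position: centre the landmarks on their centroid.
    centred = points_3d - points_3d.mean(axis=0)
    # Normalise for distance to the sensor using the mean landmark radius.
    scale = np.sqrt((centred ** 2).sum(axis=1)).mean()
    centred = centred / scale
    # Use the normalised coordinates themselves as features; pairwise
    # landmark distances would be another common choice.
    return centred.ravel()

# Example: 121 hypothetical landmarks for one frame.
frame = np.random.default_rng(0).normal(size=(121, 3))
features = frame_features(frame)  # shape (363,)
```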

Cited by 24 publications (11 citation statements)
References 9 publications
“…The system proposed by (Youssef, Aly, Ibrahim, & Abbott, 2013) attempts to recognize the six basic emotions using a Microsoft Kinect sensor. At first, the system extracts 4D facial points (dynamic 3D facial points).…”
Section: Affect Sensing - Systems and Devices (mentioning)
Confidence: 99%
“…A small number of studies have used the Kinect depth sensor for face applications. These include the recognition of basic facial expressions [14] and the classification of different facial exercises [15]. In U.K., ongoing research uses the Kinect depth sensor for facial rehabilitation of patients following stroke [16].…”
Section: Introduction (mentioning)
Confidence: 99%
“…Using these two features respectively, the classification of emotions has been done using support vector machine (SVM) classifiers and the recognition results of 30 consecutive frames are fused by the fusion algorithm based on improved emotional profiles (IEPs). In 2013, Youssef et al [10] constructed a dataset containing 3D data for 14 different persons performing the 6 basic facial expressions. SVM and k-NN are used to classify emotions.…”
Section: Introduction (mentioning)
Confidence: 99%
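Neither the abstract nor the citation statements above include code, but the pipeline the last statement describes (per-frame classification of 3D facial-point features with SVM or k-NN, with predictions over 30 consecutive frames fused into one label) can be sketched roughly as follows. This is a hedged illustration using scikit-learn with synthetic data and a simple majority-vote fusion in place of the IEP-based fusion mentioned in the citing paper; the feature dimensionality, classifier hyper-parameters, and window length are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

EXPRESSIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

# Hypothetical training data: one row per frame of flattened 3D facial-point
# features, with an integer expression label per frame.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(600, 363))   # e.g. 121 landmarks x 3 coordinates
y_train = rng.integers(0, 6, size=600)

svm_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
knn_clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
svm_clf.fit(X_train, y_train)
knn_clf.fit(X_train, y_train)

def classify_sequence(frames: np.ndarray, clf) -> str:
    """Fuse per-frame predictions over a window (e.g. 30 frames) by majority vote."""
    per_frame = clf.predict(frames)
    fused = np.bincount(per_frame, minlength=len(EXPRESSIONS)).argmax()
    return EXPRESSIONS[fused]

window = rng.normal(size=(30, 363))     # 30 consecutive frames of features
print(classify_sequence(window, svm_clf), classify_sequence(window, knn_clf))
```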