2016
DOI: 10.1016/j.imavis.2016.04.017

Improving facial analysis and performance driven animation through disentangling identity and expression

Abstract: We present techniques for improving performance driven facial animation, emotion recognition, and facial key-point or landmark prediction using learned identity invariant representations. Established approaches to these problems can work well if sufficient examples and labels for a particular identity are available and factors of variation are highly controlled. However, labeled examples of facial expressions, emotions and key-points for new individuals are difficult and costly to obtain. In this paper we impr…
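The abstract describes learning representations that separate identity (stable for a person) from expression (varying frame to frame). A minimal sketch of one way such a disentanglement can be set up is shown below; the two-encoder architecture, layer sizes, and the pairing loss are assumptions for illustration, not the authors' model.

```python
# Hypothetical sketch: split a face feature vector into an identity code and an
# expression code, and encourage identity codes of the same person to agree
# across different expressions. Architecture details are assumptions.
import torch
import torch.nn as nn

class DisentanglingAutoencoder(nn.Module):
    def __init__(self, input_dim=2048, id_dim=64, expr_dim=32):
        super().__init__()
        # Separate encoders for identity and expression factors.
        self.identity_enc = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU(),
                                          nn.Linear(256, id_dim))
        self.expression_enc = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU(),
                                            nn.Linear(256, expr_dim))
        # Decoder reconstructs the input from both factors together.
        self.decoder = nn.Sequential(nn.Linear(id_dim + expr_dim, 256), nn.ReLU(),
                                     nn.Linear(256, input_dim))

    def forward(self, x):
        z_id = self.identity_enc(x)
        z_expr = self.expression_enc(x)
        recon = self.decoder(torch.cat([z_id, z_expr], dim=-1))
        return recon, z_id, z_expr

model = DisentanglingAutoencoder()
# Two batches of features: same identities, different expressions (toy data).
x_a, x_b = torch.randn(8, 2048), torch.randn(8, 2048)
recon_a, id_a, _ = model(x_a)
recon_b, id_b, _ = model(x_b)
# Reconstruction keeps expression information; matching identity codes across
# expressions pushes the identity branch toward expression-invariant features.
loss = (nn.functional.mse_loss(recon_a, x_a)
        + nn.functional.mse_loss(recon_b, x_b)
        + nn.functional.mse_loss(id_a, id_b))
```

Once trained this way, the expression code can be used as an identity-invariant input for emotion recognition or performance driven animation, which is the use case the abstract targets.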

Cited by 3 publications (1 citation statement)
References 59 publications
“…There is no accurate evaluation standard for the quality of expression feature extraction methods, because the accuracy of features depends on specific problems and applied scenes. For facial expression images, the feature extraction methods based on geometric [40] and apparent features [41], [44]-[47] are both conventional. Expression classification and recognition is the last step of the FER system; however, it is a key step.…”
Section: B. Facial Feature Extraction and Recognition (mentioning, confidence: 99%)
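The citing statement contrasts geometric features (derived from landmark positions) with appearance-based features. A small illustration of a geometric feature vector is sketched below; the 68-point landmark convention, the inter-ocular normalisation, and the function name are assumptions, not the cited papers' exact method.

```python
# Hypothetical geometric expression features: all pairwise landmark distances,
# normalised by inter-ocular distance so the features are roughly scale-invariant.
import numpy as np

def geometric_features(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (68, 2) array of (x, y) facial key-point coordinates."""
    left_eye = landmarks[36:42].mean(axis=0)    # eye indices per the 68-point layout
    right_eye = landmarks[42:48].mean(axis=0)
    iod = np.linalg.norm(left_eye - right_eye)  # scale normaliser
    # Pairwise distances between every landmark pair, upper triangle only.
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(landmarks), k=1)
    return dists[iu] / iod

feats = geometric_features(np.random.rand(68, 2))
print(feats.shape)  # (2278,) feature values for a 68-point model
```

Features such as these would then feed the classification stage that the statement identifies as the final, key step of an FER system.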