Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)
DOI: 10.1109/iccv.1998.710738
Separability of pose and expression in facial tracking and animation

Cited by 64 publications (57 citation statements)
References 11 publications
“…Zhiwei Zhu and Qiang Ji [39] derived a technique based on Singular Value Decomposition (SVD) [13] that recovers 3D face pose and facial expression simultaneously from a monocular video sequence in real time. This is the approach presented in their paper 'Robust Real Time Face Pose and Facial Expression Recovery'.…”
Section: 3D Face Pose Recovery and Facial Expression (mentioning)
confidence: 99%
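As a rough illustration of how an SVD can separate rigid pose from non-rigid expression, the sketch below fits a rigid rotation and translation to corresponding 3D facial landmarks with a Procrustes/Kabsch step and treats the per-point residual as the expression component. This is a minimal, self-contained example on synthetic data, not a reproduction of the method in [39]; all names and values are illustrative.

# Minimal sketch (not the algorithm of [39]): separating rigid head pose from
# non-rigid expression displacements with an SVD-based Procrustes/Kabsch fit.
# Assumes corresponding 3D facial landmarks in a neutral and a current frame.
import numpy as np

def rigid_fit_svd(src, dst):
    """Least-squares rigid transform (R, t) such that dst ~ src @ R.T + t."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Synthetic example: 20 "landmarks" translated rigidly, plus a small
# non-rigid perturbation standing in for an expression change.
rng = np.random.default_rng(0)
neutral = rng.random((20, 3))
current = neutral + np.array([0.0, 0.0, 0.2]) + 0.01 * rng.standard_normal((20, 3))

R, t = rigid_fit_svd(neutral, current)
expression_residual = current - (neutral @ R.T + t)    # non-rigid component

The point of the sketch is only that a single SVD of the landmark cross-covariance yields the rigid pose, after which whatever motion remains can be attributed to expression.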
“…There are two main shortcomings of such an approach: firstly, it requires the set of interest points in each image to lie on static 3D surfaces of the scene; and secondly, the surfaces of the scene must be textured enough to allow interest points to be estimated. When we approach the problem of estimating 3D rigid facial motion from images, we must estimate the rigid motion of a 3D surface that undergoes many instantaneous local deformations caused by local facial motions [1,5,8]. Furthermore, it is well known that the surface of the face is not textured enough.…”
Section: Introduction (mentioning)
confidence: 99%
“…Feature-based procedures were the first tracking algorithms to be introduced. They are based on tracking a discrete set of texture elements such as eye or nose corners and the contours of expressive regions (eyes, eyelids or mouth) [2,14]. They can only estimate the motion of textured regions and, therefore, they provide sparse information about the deformation of the face.…”
Section: Introduction (mentioning)
confidence: 99%
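To make the feature-based idea in the statement above concrete, here is a minimal sketch, not the procedure of [2,14], that detects a sparse set of textured facial elements with a Shi-Tomasi corner detector and follows them into the next frame with pyramidal Lucas-Kanade optical flow in OpenCV. The file names and parameter values are placeholders.

# Minimal sketch of feature-based tracking (illustrative only): sparse corners
# are detected in one frame and tracked into the next, so motion is recovered
# only for textured regions, exactly the sparsity limitation described above.
import cv2
import numpy as np

prev = cv2.imread("face_frame_000.png", cv2.IMREAD_GRAYSCALE)   # placeholder file
curr = cv2.imread("face_frame_001.png", cv2.IMREAD_GRAYSCALE)   # placeholder file

# Shi-Tomasi corners: only sufficiently textured points (eye/nose corners,
# mouth contour, ...) are detected.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=100, qualityLevel=0.01, minDistance=7)

# Pyramidal Lucas-Kanade optical flow; 'status' flags points that were lost.
p1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None,
                                           winSize=(15, 15), maxLevel=2)

good_old = p0[status.flatten() == 1].reshape(-1, 2)
good_new = p1[status.flatten() == 1].reshape(-1, 2)
displacements = good_new - good_old   # sparse 2D motion of textured regions only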