2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01821
FaceFormer: Speech-Driven 3D Facial Animation with Transformers

Cited by 114 publications (50 citation statements)
References 9 publications
“…Evaluation of mouth synchronization. We first followed the lip synchronization metrics used in [18] and [22] to assess the quality of lip movements, defining the maximum L2 error over all lip vertices as the lip error per frame.…”
Section: Experimental Results and Analysis (mentioning, confidence: 99%)
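The metric quoted above can be made concrete with a minimal sketch (not the cited authors' code): per frame, take the maximum L2 distance over the lip vertices between predicted and ground-truth positions, then average over frames. The array shapes and the lip-vertex index list are hypothetical.

```python
import numpy as np

def lip_vertex_error(pred, gt, lip_idx):
    """Lip error as described above.

    pred, gt: (T, V, 3) arrays of predicted / ground-truth vertex positions.
    lip_idx: indices of the lip vertices (assumed given by the face template).
    """
    # L2 distance between predicted and ground-truth positions of each lip vertex
    dists = np.linalg.norm(pred[:, lip_idx, :] - gt[:, lip_idx, :], axis=-1)  # (T, L)
    per_frame = dists.max(axis=-1)   # maximum error over lip vertices, per frame
    return per_frame.mean()          # averaged over all frames
```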
“…In this way, audio and facial motion models can be aligned by cross-modal multi-head attention with bias [18].…”
Section: Voice Encoder (mentioning, confidence: 99%)
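A minimal sketch of the idea referenced here, under the assumption of standard scaled dot-product attention: motion features attend to audio features, and an additive bias matrix is applied to the attention logits before the softmax to encourage alignment between the two modalities. Function name, tensor shapes, and the bias construction are illustrative, not FaceFormer's implementation.

```python
import torch
import torch.nn.functional as F

def biased_cross_modal_attention(motion_q, audio_kv, w_q, w_k, w_v, bias, num_heads):
    """motion_q: (T_m, d); audio_kv: (T_a, d); w_*: (d, d); bias: (T_m, T_a)."""
    d = motion_q.size(-1)
    d_h = d // num_heads
    # Project queries from the motion stream, keys/values from the audio stream.
    q = (motion_q @ w_q).view(-1, num_heads, d_h).transpose(0, 1)  # (H, T_m, d_h)
    k = (audio_kv @ w_k).view(-1, num_heads, d_h).transpose(0, 1)  # (H, T_a, d_h)
    v = (audio_kv @ w_v).view(-1, num_heads, d_h).transpose(0, 1)  # (H, T_a, d_h)
    # Additive bias on the logits steers the attention toward aligned frames.
    logits = q @ k.transpose(-2, -1) / d_h ** 0.5 + bias           # (H, T_m, T_a)
    attn = F.softmax(logits, dim=-1)
    out = (attn @ v).transpose(0, 1).reshape(-1, d)                # (T_m, d)
    return out
```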