2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
DOI: 10.1109/iccvw.2017.188

FacePoseNet: Making a Case for Landmark-Free Face Alignment

Abstract: We show how a simple convolutional neural network (CNN) can be trained to accurately and robustly regress 6 degrees of freedom (6DoF) 3D head pose, directly from image intensities. We further explain how this FacePoseNet (FPN) can be used to align faces in 2D and 3D as an alternative to explicit facial landmark detection for these tasks. We claim that in many cases the standard means of measuring landmark detector accuracy can be misleading when comparing different face alignments. Instead, we compare our FPN …
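As a rough illustration of what the abstract describes (a CNN regressing 6DoF head pose directly from pixel intensities), here is a minimal PyTorch sketch. The ResNet-18 backbone, 224x224 input size, and MSE pose loss are illustrative assumptions, not the actual FPN architecture or training setup.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class PoseRegressor(nn.Module):
    """Illustrative 6DoF head-pose regressor: an off-the-shelf CNN backbone
    with a 6-dimensional output head (3 rotation + 3 translation parameters).
    This is a sketch, not the FPN architecture from the paper."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any image CNN would do here
        backbone.fc = nn.Linear(backbone.fc.in_features, 6)
        self.net = backbone

    def forward(self, x):
        # x: batch of face crops with raw intensities, shape (N, 3, 224, 224)
        return self.net(x)  # (N, 6): e.g. rotation vector + translation

model = PoseRegressor()
crops = torch.randn(4, 3, 224, 224)          # dummy face crops
pose = model(crops)                          # direct regression, no landmark detection
target = torch.zeros_like(pose)              # placeholder ground-truth 6DoF labels
loss = nn.functional.mse_loss(pose, target)  # simple L2 pose loss for illustration
```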

Cited by 125 publications (73 citation statements)
References 54 publications

Citation statements (ordered by relevance):
“…The texture of the objects is also randomly changed. Finally, the 3D pose of the generic 3D model with respect to the face image is estimated using [5] and then used to render the object and register its occlusion mask. Face Recognition Engine: We employ two state-of-the-art face recognition engines as a proxy to measure the impact of face completion on the final recognition accuracy.…”
Section: Experimental Evaluation
confidence: 99%
“…For 1:N face identification, we report the results using TPIR vs. FPIR (equivalent to a DET curve) and Rank-N (Table 5 and Figure 9(b)). We compare three proposed models with VGGFace2 [2], FacePoseNet (FPN) [3], Comparator Net [38], and PRN [14]. Similarly to the evaluation on IJB-A, all performance evaluations are based on the squared L2 distance threshold.…”
Section: Ablation Study
confidence: 99%
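For context on the "squared L2 distance threshold" mentioned in the excerpt above, a minimal sketch of how such a verification decision is typically made follows; the 512-D embeddings and the 1.2 threshold are arbitrary placeholders, not values from the cited work.

```python
import numpy as np

def squared_l2(a, b):
    """Squared L2 distance between two face embeddings."""
    d = a - b
    return float(np.dot(d, d))

rng = np.random.default_rng(0)
probe, gallery = rng.normal(size=(2, 512))  # two toy 512-D embeddings
probe /= np.linalg.norm(probe)              # unit-normalize, as is common for face descriptors
gallery /= np.linalg.norm(gallery)

THRESHOLD = 1.2                             # hypothetical operating point, tuned on validation data
same_identity = squared_l2(probe, gallery) < THRESHOLD
print(same_identity)
```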
“…At present, there are no neural networks specifically designed for point cloud registration. FacePoseNet [4] directly regressed 6DoF transform parameters between a generic 3D facial keypoint model and the keypoints in intensity images. [14] proposed a network to estimate camera poses from monocular images in quaternion form.…”
Section: Related Work
confidence: 99%
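To make the last excerpt concrete, the sketch below shows one common way a regressed 6DoF pose (axis-angle rotation plus translation) can be used to project a generic 3D facial keypoint model into the image, which is the basis for rendering or alignment. The keypoint coordinates, intrinsics, and pose values are made-up placeholders, and OpenCV's projectPoints stands in for whatever projection routine a given pipeline actually uses.

```python
import numpy as np
import cv2

# Hypothetical output of a pose regressor: axis-angle rotation and translation.
rvec = np.array([0.1, -0.3, 0.05], dtype=np.float64)  # radians (Rodrigues vector)
tvec = np.array([0.0, 0.0, 50.0], dtype=np.float64)   # translation along the camera axis

# A tiny stand-in for a generic 3D facial keypoint model (real models have many more points).
model_3d = np.array([[ 0.0, 0.0,  0.0],   # nose tip
                     [-3.0, 3.5, -2.0],   # left eye corner
                     [ 3.0, 3.5, -2.0]],  # right eye corner
                    dtype=np.float64)

# Assumed pinhole intrinsics for a 224x224 face crop.
K = np.array([[500.0,   0.0, 112.0],
              [  0.0, 500.0, 112.0],
              [  0.0,   0.0,   1.0]])

# Project the generic model under the regressed pose; the resulting 2D points can then
# drive a warp of the face crop into a canonical (aligned) view.
pts_2d, _ = cv2.projectPoints(model_3d, rvec, tvec, K, None)
print(pts_2d.reshape(-1, 2))
```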