2019
DOI: 10.1109/tpami.2018.2858819

3D-Aided Dual-Agent GANs for Unconstrained Face Recognition

Abstract: In this paper, we propose a Dual-Agent Generative Adversarial Network (DA-GAN) model, which can improve the realism of a face simulator's output using unlabeled real faces while preserving identity information during realism refinement. The dual agents are specifically designed to distinguish real vs. fake and identities simultaneously. In particular, we employ an off-the-shelf 3D face model as a simulator to generate profile face images with varying poses. DA-GAN leverages an FCN as the generator and an auto…
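The abstract is truncated, but the dual-agent objective it describes is easy to illustrate. Below is a minimal PyTorch-style sketch under stated assumptions: the module names `G`, `D_adv`, and `D_id`, the L1 consistency term, and all loss weights are hypothetical, not the paper's exact formulation. One agent pushes the refined simulator output toward the real-face distribution; the other keeps the identity label fixed.

```python
# Minimal sketch of a dual-agent GAN training step (not the authors' code).
# Assumptions: `G` is a fully convolutional refiner, `D_adv` is the
# real-vs-fake agent, `D_id` is a frozen identity classifier.
import torch
import torch.nn.functional as F

def dual_agent_step(G, D_adv, D_id, opt_G, opt_D,
                    simulated, real, identity_labels,
                    w_adv=1.0, w_id=1.0, w_pix=0.1):
    # Refine the simulator (3D render) output toward the real-face distribution.
    refined = G(simulated)

    # Agent 1 (realism): train the discriminator on real vs. refined images.
    opt_D.zero_grad()
    real_logits = D_adv(real)
    fake_logits = D_adv(refined.detach())
    d_loss = (F.binary_cross_entropy_with_logits(
                  real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(
                  fake_logits, torch.zeros_like(fake_logits)))
    d_loss.backward()
    opt_D.step()

    # Generator update: fool agent 1 while agent 2 pins down the identity.
    opt_G.zero_grad()
    gen_logits = D_adv(refined)
    adv_loss = F.binary_cross_entropy_with_logits(
        gen_logits, torch.ones_like(gen_logits))
    id_loss = F.cross_entropy(D_id(refined), identity_labels)  # agent 2
    pix_loss = F.l1_loss(refined, simulated)  # stay close to the 3D render
    g_loss = w_adv * adv_loss + w_id * id_loss + w_pix * pix_loss
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```

The key difference from a vanilla GAN refiner is the second agent: the identity loss is what lets the refined profile faces remain usable as labeled training data for recognition.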


Cited by 112 publications (53 citation statements). References 38 publications.
“…Recent developments of deep‐learning models create opportunities to employ sequence‐to‐sequence models for automatic conversations (Li et al.; Li, Galley, Brockett, Gao, & Dolan), deep reinforcement learning for controlling account behaviors (He et al.; Mnih et al.; Serban et al.), and generative adversarial networks for creating fake profile images and content (Goodfellow et al.; Zhao et al.). Some of these methodologies have already demonstrated a capability to surpass human performance on various tasks, such as face recognition and game playing (Mnih et al.; Taigman, Yang, Ranzato, & Wolf).…”
Section: Discussion and Perspectives
confidence: 99%
“…Third, model C shows consistently higher accuracy than model B, with improvements of 1.3-4.5% TAR at FAR = 0.001-0.1 in the verification task, 3.3-7.3% TPIR at FPIR = 0.01-0.1 in the open-set identification task, and 1.5% Rank-1 in the closed-set identification task. Last, although model C is trained from scratch, it outperforms the state-of-the-art method (DA-GAN [42]) by 0.7-1.9% TAR at FAR = 0.001-0.1 in the verification task, 2.2% Rank-1 in the closed-set identification task, and 5.2% TPIR at FPIR = 0.01 in the open-set identification task on the IJB-A dataset. This validates the effectiveness of the proposed AFRN with pair selection on large-scale, challenging unconstrained face recognition.…”
Section: Ablation Study
confidence: 99%
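For context on the quoted numbers: TAR at a fixed FAR is computed by choosing the similarity threshold at which the given fraction of impostor pairs would be falsely accepted, then measuring how many genuine pairs clear that threshold. A small illustrative sketch in NumPy; the score arrays here are synthetic, purely for demonstration.

```python
# Illustrative computation of TAR at a fixed FAR (not any paper's exact code).
import numpy as np

def tar_at_far(genuine_scores, impostor_scores, far=0.001):
    """Pick the threshold at which a fraction `far` of impostor pairs
    are accepted, then report the acceptance rate on genuine pairs."""
    s = np.sort(impostor_scores)
    threshold = s[int(np.ceil((1 - far) * len(s))) - 1]
    return np.mean(genuine_scores > threshold)

# Hypothetical cosine-similarity scores for matched / mismatched face pairs.
rng = np.random.default_rng(0)
genuine = rng.normal(0.6, 0.15, 10_000)
impostor = rng.normal(0.1, 0.15, 100_000)
for far in (0.001, 0.01, 0.1):
    print(f"TAR@FAR={far}: {tar_at_far(genuine, impostor, far):.3f}")
```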
“…From the experimental results (Table 4 and Figure 8), we have the following observations. First, compared to model A, model B achieves consistently superior accuracy (TAR and TPIR): improvements of 0.4-0.9% TAR at FAR = 0.001-0.1 in the verification task, 1.2-2.6% TPIR at FPIR = 0.01 and 0.1 in the open-set identification task, and 0.6% Rank-1 in the closed-set identification task. Second, model C shows consistently higher accuracy than model A, with improvements of 1.8-5.4% TAR at FAR = 0.001-0.1 in the verification task, 4.5-9.9% TPIR at FPIR = 0.01-0.1 in the open-set identification task, and 1.8% Rank-1 in the closed-set identification task.…”

Table 4 (IJB-A): verification TAR at FAR = 0.001/0.01/0.1; open-set identification TPIR at FPIR = 0.01/0.1; closed-set identification Rank-1/5/10.

| Method | TAR @FAR=0.001 | TAR @FAR=0.01 | TAR @FAR=0.1 | TPIR @FPIR=0.01 | TPIR @FPIR=0.1 | Rank-1 | Rank-5 | Rank-10 |
|---|---|---|---|---|---|---|---|---|
| Pose-Aware Models [21] | 0.652 ± 0.037 | 0.826 ± 0.018 | - | - | - | 0.840 ± 0.012 | 0.925 ± 0.008 | 0.946 ± 0.005 |
| All-in-One [27] | 0.823 ± 0.02 | 0.922 ± 0.01 | 0.976 ± 0.004 | 0.792 ± 0.02 | 0.887 ± 0.014 | 0.947 ± 0.008 | 0.988 ± 0.003 | 0.986 ± 0.003 |
| NAN [39] | 0.881 ± 0.011 | 0.941 ± 0.008 | 0.978 ± 0.003 | 0.817 ± 0.041 | 0.917 ± 0.009 | 0.958 ± 0.005 | 0.980 ± 0.005 | 0.986 ± 0.003 |
| VGGFace2 [2] | 0.904 ± 0.020 | 0.958 ± 0.004 | 0.985 ± 0.002 | 0.847 ± 0.051 | 0.930 ± 0.007 | 0.981 ± 0.003 | 0.994 ± 0.002 | 0.996 ± 0.001 |
| VGGFace2 ft [2] | 0.921 ± 0.014 | 0.968 ± 0.006 | 0.990 ± 0.002 | 0.883 ± 0.038 | 0.946 ± 0.004 | 0.982 ± 0.004 | 0.993 ± 0.002 | 0.994 ± 0.001 |
| PRN [14] | 0.901 ± 0.014 | 0.950 ± 0.006 | 0.985 ± 0.002 | 0.861 ± 0.038 | 0.931 ± 0.004 | 0.976 ± 0.003 | 0.992 ± 0.003 | 0.994 ± 0.003 |
| PRN+ [14] | 0.919 ± 0.013 | 0.965 ± 0.004 | 0.988 ± 0.002 | 0.882 ± 0.038 | 0.941 ± 0.004 | 0.982 ± 0.004 | 0.992 ± 0.002 | 0.995 ± 0.001 |
| DR-GAN [34] | 0.539 ± 0.043 | 0.774 ± 0.027 | - | - | - | 0.855 ± 0.015 | 0.947 ± 0.011 | - |
| DREAM [1] | 0.868 ± 0.015 | 0.944 ± 0.009 | - | - | - | 0.946 ± 0.011 | 0.968 ± 0.010 | - |
| DA-GAN [42] | 0.930 ± 0.005 | 0.976 ± 0.007 | 0.991 ± 0.003 | 0.890 ± 0.039 | 0.949 ± 0.009 | 0.971 ± 0.007 | 0.989 ± 0.003 | - |
Section: Ablation Study
confidence: 99%
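The open-set identification columns (TPIR at a fixed FPIR) follow the same pattern as TAR@FAR, except the threshold is set on the best gallery match of probes that are not enrolled in the gallery. A hedged sketch with all inputs hypothetical, not the exact IJB-A protocol implementation:

```python
# Illustrative open-set identification metric (TPIR at fixed FPIR); a sketch,
# not a reference implementation of the IJB-A evaluation protocol.
import numpy as np

def tpir_at_fpir(in_gallery_scores, in_gallery_correct,
                 out_gallery_scores, fpir=0.01):
    """`in_gallery_scores`: top-match score for each enrolled probe;
    `in_gallery_correct`: bool array, whether that top match is the right ID;
    `out_gallery_scores`: top-match score for each non-enrolled probe."""
    s = np.sort(out_gallery_scores)
    # Threshold such that a fraction `fpir` of non-enrolled probes pass it.
    threshold = s[int(np.ceil((1 - fpir) * len(s))) - 1]
    # A true positive must beat the threshold AND rank the correct ID first.
    return np.mean((in_gallery_scores > threshold) & in_gallery_correct)
```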
“…In particular, GANs are known to be able to generate realistic samples while the discriminator and the generator play a "two-player minimax game". Generating new data with GANs and augmenting real data with it has been investigated in recent works (Baek, Kim, and Kim 2018; Gecer et al. 2018; Zhang et al. 2018; Shmelkov, Schmid, and Alahari 2018; Zhao et al. 2018b; Tran, Yin, and Liu 2017; Zhao et al. 2018a; Huang et al. 2017), to name a few. In this paper, we investigate methods and tricks to sub-sample the synthetic images from a GAN instead of randomly augmenting with them.…”
Section: Related Work
confidence: 99%
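The passage proposes sub-sampling GAN output rather than augmenting with it at random. One plausible reading, sketched below with entirely hypothetical names (`id_classifier`, `keep_ratio`); the selection rule here is an assumption, not the cited paper's method: keep only the synthetic faces that a pretrained identity classifier labels with high confidence before mixing them into the real training set.

```python
# Hedged sketch of confidence-based sub-sampling of GAN-generated faces.
import torch
from torch.utils.data import ConcatDataset, TensorDataset

@torch.no_grad()
def select_synthetic(id_classifier, synth_images, synth_labels, keep_ratio=0.5):
    """Keep the synthetic images the identity classifier is most sure about."""
    probs = torch.softmax(id_classifier(synth_images), dim=1)
    # Confidence assigned to each image's intended identity label.
    conf = probs.gather(1, synth_labels.unsqueeze(1)).squeeze(1)
    keep = conf.topk(int(keep_ratio * len(conf))).indices
    return synth_images[keep], synth_labels[keep]

def build_augmented_dataset(real_ds, id_classifier, synth_images, synth_labels):
    # Mix the filtered synthetic samples with the real training set.
    imgs, labels = select_synthetic(id_classifier, synth_images, synth_labels)
    return ConcatDataset([real_ds, TensorDataset(imgs, labels)])
```

The design intuition is that low-confidence synthetic faces are the ones most likely to have lost their identity during generation, so dropping them should hurt less than dropping real data.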