2019
DOI: 10.1109/tpami.2018.2868350

Representation Learning by Rotating Your Faces

Abstract: The large pose discrepancy between two face images is one of the fundamental challenges in automatic face recognition. Conventional approaches to pose-invariant face recognition either perform face frontalization on, or learn a pose-invariant representation from, a non-frontal face image. We argue that it is more desirable to perform both tasks jointly to allow them to leverage each other. To this end, this paper proposes a Disentangled Representation learning-Generative Adversarial Network (DR-GAN) with three…


Cited by 136 publications (88 citation statements)
References 58 publications
“…(v) DR-GAN: Proposed by a team from Michigan State University, USA, the framework performs face detection and alignment on the input images using MT-CNN [33]. This is followed by feature extraction using the Disentangled Representation learning-Generative Adversarial Network (DR-GAN) [38]. Classification is performed using Cosine distance.…”
Section: (iii) Deep Disguise Recognizer Network (DDRNet) [27]
Citation type: mentioning (confidence: 99%)
“…In addition to adopting 3D convolutions to learn 3D features, during training, we introduce more bias about the 3D world by transforming these learnt features to random poses before projecting them to 2D images. This random pose transformation is crucial to guarantee that HoloGAN learns a 3D representation that is disentangled and can be rendered from all possible views, as also observed by Tran et al [55] in DR-GAN. However, HoloGAN performs explicit 3D rigid-body transformation, while DR-GAN performs this using an implicit vector representation.…”
Section: Learning With View-dependent Mappings
Citation type: mentioning (confidence: 92%)
“…We explore the effect of our proposed 3DMM on preserving identity when reconstructing face images. Using DR-GAN [48], a pretrained face recognition network, we can compute the cosine distance between the input and its reconstruction from different models. Fig.…”
Section: Identity-preserving
Citation type: mentioning (confidence: 99%)
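Several of the citation statements above compare DR-GAN identity features with cosine distance. A minimal sketch of that comparison, assuming the embeddings are fixed-length vectors (the 320-dimensional size and the random "gallery"/"probe" vectors here are purely illustrative, not outputs of the actual DR-GAN network):

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two feature vectors: 1 - cos(angle)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical identity embeddings; in the cited pipelines these would be
# features extracted by the trained DR-GAN encoder.
rng = np.random.default_rng(0)
gallery = rng.standard_normal(320)                 # enrolled identity
probe = gallery + 0.1 * rng.standard_normal(320)   # same identity, perturbed

print(cosine_distance(gallery, gallery))  # 0.0 for identical vectors
print(cosine_distance(gallery, probe))    # small distance -> likely a match
```

A verification decision then reduces to thresholding this distance: the smaller the cosine distance, the more likely the two faces share an identity.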