2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.01010
R³ Adversarial Network for Cross Model Face Recognition

Cited by 32 publications (29 citation statements); references 16 publications. Citing publications span 2020–2024.
“…GANs are utilized in solving general face recognition problems such as cross-age face recognition, face synthesis, pose-invariant face recognition, video-based face recognition, makeup-invariant face recognition, and so on. For example, the R3AN architecture [85] was proposed for the cross-model FR problem. It divides the method into three paths: reconstruction, representation, and regression for training.…”
Section: Generative Adversarial Network
mentioning confidence: 99%
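The three-path split mentioned in this statement (reconstruction, representation, regression) can be pictured with a minimal sketch. All module shapes, layer sizes, and the 112x112 reconstruction size below are illustrative assumptions rather than the R3AN paper's actual design, and the adversarial training component is omitted.

# Hypothetical sketch of a three-path pipeline (reconstruction -> representation -> regression)
# for converting one face-recognition model's features toward another model's feature space.
# Shapes and layers are illustrative assumptions, not the R3AN architecture itself.
import torch
import torch.nn as nn

class ReconstructionPath(nn.Module):
    """Decodes a source-model feature vector into a face-like image tensor."""
    def __init__(self, feat_dim=512, img_channels=3):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 128 * 7 * 7)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, img_channels, 4, stride=4), nn.Tanh(),
        )
    def forward(self, feat):
        return self.deconv(self.fc(feat).view(-1, 128, 7, 7))

class RepresentationPath(nn.Module):
    """Encodes the reconstructed image back into an intermediate representation."""
    def __init__(self, img_channels=3, rep_dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(img_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, rep_dim)
    def forward(self, img):
        return self.fc(self.conv(img).flatten(1))

class RegressionPath(nn.Module):
    """Regresses the intermediate representation to the target model's feature space."""
    def __init__(self, rep_dim=512, target_dim=512):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(rep_dim, 1024), nn.ReLU(), nn.Linear(1024, target_dim))
    def forward(self, rep):
        return self.mlp(rep)

if __name__ == "__main__":
    src_feat = torch.randn(4, 512)        # features from the source FR model
    img = ReconstructionPath()(src_feat)  # path 1: reconstruct a face image
    rep = RepresentationPath()(img)       # path 2: re-encode the image
    tgt_feat = RegressionPath()(rep)      # path 3: regress to the target feature space
    print(img.shape, rep.shape, tgt_feat.shape)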
“…Cross-model compatibility: The broad goal of this area is to ensure embeddings generated by different models are compatible. Some recent works ensure cross-model compatibility by learning transformation functions from the query embedding space to the gallery one [33,5,13]. Different from these works, our approach directly optimizes the query model such that its metric space aligns with that of the gallery.…”
Section: Related Work
mentioning confidence: 99%
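The transformation-function idea referenced in this statement amounts to learning a mapping from the query model's embedding space into the gallery model's space. Below is a minimal sketch under assumed embedding dimensions and a cosine-distance loss; it is not the specific method of the cited works [33, 5, 13].

# Sketch: learn a mapping from a query model's embeddings to a gallery model's embedding space.
# Embedding sizes, MLP shape, and loss are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

query_dim, gallery_dim = 256, 512   # assumed embedding sizes of the two models

transform = nn.Sequential(
    nn.Linear(query_dim, 512), nn.ReLU(),
    nn.Linear(512, gallery_dim),
)
optimizer = torch.optim.Adam(transform.parameters(), lr=1e-3)

def training_step(query_emb, gallery_emb):
    """One step: push transformed query embeddings toward the gallery-model embeddings
    computed from the same images."""
    mapped = F.normalize(transform(query_emb), dim=1)
    target = F.normalize(gallery_emb, dim=1)
    loss = (1 - (mapped * target).sum(dim=1)).mean()   # cosine-distance loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy paired embeddings standing in for real (query-model, gallery-model) pairs.
q = torch.randn(8, query_dim)
g = torch.randn(8, gallery_dim)
print(training_step(q, g))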
“…They present BCT, an algorithm that allows new embedding models to be compatible with old models through a joint training procedure involving a distillation loss. Other works [8,29,46] attempt to construct a unified representation space on which models are compatible. These procedures also modify training of individual models to ensure that they are easy to transform to this unified embedding space.…”
Section: Related Work
mentioning confidence: 99%
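A BCT-style objective of the kind this statement describes can be sketched as the new model's usual classification loss plus an influence term that asks a frozen old classifier to work on the new embeddings. The encoder, dimensions (including an assumed equal old/new embedding size), and weighting below are illustrative assumptions, not the exact formulation in the cited papers.

# Sketch of a backward-compatible training (BCT-style) objective: train the new embedding
# model with its own classifier plus an "influence" term from the frozen old classifier.
# Shapes and the weighting are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes, emb_dim = 100, 128

new_backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, emb_dim))  # stand-in encoder
new_classifier = nn.Linear(emb_dim, num_classes)
old_classifier = nn.Linear(emb_dim, num_classes)   # stands in for the pretrained old-model head
for p in old_classifier.parameters():
    p.requires_grad_(False)                         # old head stays frozen

optimizer = torch.optim.SGD(
    list(new_backbone.parameters()) + list(new_classifier.parameters()), lr=0.1)

def bct_style_loss(images, labels, influence_weight=1.0):
    emb = new_backbone(images)
    loss_new = F.cross_entropy(new_classifier(emb), labels)   # usual training loss
    loss_old = F.cross_entropy(old_classifier(emb), labels)   # compatibility (influence) term
    return loss_new + influence_weight * loss_old

images = torch.randn(16, 3, 32, 32)
labels = torch.randint(0, num_classes, (16,))
loss = bct_style_loss(images, labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print(loss.item())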