2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2019.00279
Analyzing and Reducing the Damage of Dataset Bias to Face Recognition With Synthetic Data

Abstract: It is well known that deep learning approaches to face recognition suffer from various biases in the available training data. In this work, we demonstrate the large potential of synthetic data for analyzing and reducing the negative effects of dataset bias on deep face recognition systems. In particular, we explore two complementary application areas for synthetic face images: 1) Using fully annotated synthetic face images we can study the face recognition rate as a function of interpretable parameters such as …

Cited by 124 publications (79 citation statements)
References 19 publications
“…Computer vision datasets are often found to be biased [64,76]. Human face datasets are particularly scrutinized [2,20,43,45,46,54] because methods and models trained on these data can end up being biased along attributes that are protected by the law [44]. Approaches to mitigating dataset bias include collecting more thorough examples [54], using image synthesis to compensate for distribution gaps [46], and example resampling [48].…”
Section: Related Work
Citation type: mentioning; confidence: 99%
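The example-resampling approach mentioned in this excerpt can be illustrated with a short sketch. This is a minimal illustration, not code from any of the cited papers; the subgroup attribute and the uniform target distribution (upsampling every subgroup to match the largest one) are assumptions made for the example.

```python
import numpy as np

def balanced_resample(groups, rng=None):
    """Return indices that resample a dataset so every subgroup
    appears as often as the largest one (upsampling with replacement)."""
    rng = np.random.default_rng(rng)
    groups = np.asarray(groups)
    uniq, counts = np.unique(groups, return_counts=True)
    per_group = counts.max()  # target size: the largest subgroup
    idx = [rng.choice(np.flatnonzero(groups == g), size=per_group, replace=True)
           for g in uniq]
    return np.concatenate(idx)

# Toy example: subgroup "b" is underrepresented 9:1.
groups = np.array(["a"] * 90 + ["b"] * 10)
resampled = balanced_resample(groups, rng=0)
print({g: int((groups[resampled] == g).sum()) for g in ("a", "b")})
# {'a': 90, 'b': 90}
```

A weighted sampler during training achieves the same effect without materializing the duplicated examples.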
“…The machine learning community is active in analyzing biases of learning models, and how one may train models where bias is mitigated [3,14,18,31,33,41,46,51,68], usually by ensuring that performance is equal across certain subgroups of a dataset. Here we ask a complementary question: we assume that the system to be benchmarked is pre-trained and fixed, and we ask how to reliably measure algorithmic bias in pre-trained black-box algorithms.…”
Section: Related Work
Citation type: mentioning; confidence: 99%
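The measurement view described in this excerpt, treating the system as a fixed black box and checking performance across subgroups, can be sketched in a few lines. This is a generic illustration, not the cited authors' method; the predict callable, the accuracy metric, and the toy data are all assumptions for the example.

```python
import numpy as np

def subgroup_accuracy_gap(predict, inputs, targets, groups):
    """Evaluate a fixed black-box classifier per subgroup.

    predict: callable mapping a batch of inputs to predicted labels.
    Returns (per-subgroup accuracy dict, largest pairwise accuracy gap).
    """
    preds = np.asarray(predict(inputs))
    targets, groups = np.asarray(targets), np.asarray(groups)
    acc = {}
    for g in np.unique(groups):
        mask = groups == g
        acc[g] = float((preds[mask] == targets[mask]).mean())
    return acc, max(acc.values()) - min(acc.values())

# Toy black box whose error rate is systematically higher on group 1.
rng = np.random.default_rng(0)
targets = rng.integers(0, 2, size=1000)
groups = rng.integers(0, 2, size=1000)
flip = rng.random(1000) < np.where(groups == 1, 0.3, 0.1)
predict = lambda _: np.where(flip, 1 - targets, targets)

acc, gap = subgroup_accuracy_gap(predict, None, targets, groups)
print(acc, f"gap={gap:.3f}")  # roughly 0.9 vs 0.7 accuracy, gap near 0.2
```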
“…In this context, it was shown that 9 out of 10 2D algorithms in the Face Recognition Vendor Test 2002 [Phillips et al. 2003] improved considerably when combined with a 3DMM for face frontalization [Blanz et al. 2005]. Other applications of 3DMMs include augmenting real-world data in 3D [Masi et al. 2016] and the generation of synthetic data for training [Kortylewski et al. 2018b] and for analyzing the effects of dataset bias on face recognition systems [Kortylewski et al. 2018a, 2019].…”
Section: Face Recognition
Citation type: mentioning; confidence: 99%
“…Fair data generation with GANs may help diversify datasets used in computer vision algorithms (Xu et al 2018). For example, StyleGAN2 (Karras et al 2019) is able to produce high-quality images of non-existing human faces and has proven to be especially useful in creating diverse datasets of human faces, something that many algorithmic systems for facial recognition currently lack (Obermeyer et al 2019;Kortylewski et al 2019;Harwell 2020).…”
Section: Misguided Evidence Leading To Unwanted Bias
Citation type: mentioning; confidence: 99%
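The gap-filling use of generative models described in this excerpt can be sketched as follows. This is a minimal illustration under stated assumptions: `synthesize_face` is a hypothetical stand-in for a real conditional generator (e.g., a StyleGAN2-style model), and topping every subgroup up to the size of the largest one is an assumption of the example, not a prescription from the cited work.

```python
import numpy as np

def synthesize_face(group, rng):
    # Hypothetical stand-in for a conditional face generator; here it
    # just returns a random image with the right shape.
    return rng.random((64, 64, 3))

def fill_distribution_gaps(images, groups, rng=None):
    """Add synthetic samples until every subgroup matches the size
    of the largest one, compensating for distribution gaps."""
    rng = np.random.default_rng(rng)
    groups = np.asarray(groups)
    uniq, counts = np.unique(groups, return_counts=True)
    target = counts.max()
    new_images, new_groups = list(images), list(groups)
    for g, n in zip(uniq, counts):
        for _ in range(target - n):  # deficit of this subgroup
            new_images.append(synthesize_face(g, rng))
            new_groups.append(g)
    return new_images, np.asarray(new_groups)

# Toy dataset: group "b" has far fewer real images than group "a".
rng = np.random.default_rng(0)
images = [rng.random((64, 64, 3)) for _ in range(12)]
groups = ["a"] * 10 + ["b"] * 2
images, groups = fill_distribution_gaps(images, groups, rng=1)
print({g: int((groups == g).sum()) for g in ("a", "b")})  # both now 10
```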