2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) 2021
DOI: 10.1109/iccvw54120.2021.00458
Rethinking Common Assumptions to Mitigate Racial Bias in Face Recognition Datasets

Cited by 25 publications (11 citation statements)
References 25 publications
“…An unexpected approach is seen, as the researchers claim that training only on one race is not inherently disadvantageous [102]. They demonstrate that by training only on African faces, they achieved less skew across races than by training with a balanced dataset.…”
Section: Network Improvements on the RFW Dataset
confidence: 99%
“…An alternative route is to use data augmentation techniques to "rebalance" the dataset [27,45]. However, it was discovered that using an assumedly balanced dataset during training is not sufficient to avoid bias [20,49,50], because it is often unclear which features in the data need to be balanced. Approaches for curating or manipulating the dataset require information on the target domain, i.e., one needs to set requirements on the dataset depending on the desired operational context [6,16,22].…”
Section: Related Literature
confidence: 99%
“…The authors leveraged synthetic data for analysis and showed that facial pose and facial identity cannot be completely disentangled by deep networks (bias in model training, Figure 2(c)). To further study the impact of dataset bias, Gwilliam et al. (2021) analyzed face recognition performance after training on various racially imbalanced distributions. They observed less biased model predictions after training on a specific subgroup than after training on a balanced distribution.…”
Section: Bias in Face Detection and Recognition
confidence: 99%