2019 International Conference on Biometrics (ICB) 2019
DOI: 10.1109/icb45273.2019.8987334
The Harms of Demographic Bias in Deep Face Recognition Research

Cited by 37 publications (20 citation statements)
References 15 publications
“…We can also generate cross-ethnic faces (as participant 030 in Figure 6) to increase the diversity of the ME dataset, which primarily consists of participants from one particular country and ethnic background [38]. It is known that demographic imbalance in the training dataset causes external biases in the trained models [39, 40, 41], resulting in inaccurate and erroneous predictions. The ability to generate a wide range of participants with different demographics can alleviate this issue.…”
Section: Results
confidence: 99%
“…Ethnicity is a salient example: face recognition algorithms show variability across faces of different ethnicities (O'Toole et al., 2012), with studies demonstrating poorer performance on some demographic cohorts (Klare et al., 2012). This issue has received significant attention in computer vision (Abdurrahim et al., 2018; Garcia et al., 2019) and remains an active area of research (Wang & Deng, 2020). While face recognition networks comprise multiple steps, the algorithms that find and place landmarks may be a source of these biases.…”
Section: Study Two - Testing for Potential Biases in Automatic Landmar
confidence: 99%
“…Identifying and quantifying the amount of bias in FR technology is the initial step toward less-biased FR. Garcia et al. showed that the face-matching confidence of FR models correlates with gender and ethnicity, revealing demographic bias [37]. Cavazos et al. demonstrated that different thresholds are needed to equalize false accept rates (FARs) and recognition accuracy [38].…”
Section: Related Work
confidence: 99%
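The per-cohort threshold finding cited above (Cavazos et al.) can be illustrated with a minimal sketch. The cohort names, score distributions, and `threshold_for_far` helper below are hypothetical and are not taken from the cited papers; the sketch only shows why equalizing FAR across demographic groups can require group-specific thresholds:

```python
import numpy as np

def threshold_for_far(impostor_scores, target_far):
    """Return a similarity threshold whose false accept rate (fraction of
    impostor scores at or above it) is at most target_far.
    Assumes no tied scores for simplicity."""
    s = np.sort(np.asarray(impostor_scores, dtype=float))  # ascending
    n = len(s)
    k = int(np.floor(target_far * n))  # number of false accepts allowed
    if k == 0:
        # No false accepts allowed: threshold just above the top score.
        return float(s[-1]) + np.finfo(float).eps
    return float(s[n - k])  # exactly k scores are >= this value

# Hypothetical impostor-score distributions for two demographic cohorts:
# cohort B's impostor pairs score systematically higher, so matching its
# FAR to cohort A's requires a higher decision threshold.
cohort_a = np.linspace(0.00, 0.50, 1000)
cohort_b = np.linspace(0.20, 0.70, 1000)

t_a = threshold_for_far(cohort_a, target_far=0.01)
t_b = threshold_for_far(cohort_b, target_far=0.01)
print(t_a, t_b)  # different thresholds for the same target FAR
```

Because the two cohorts need different thresholds to reach the same FAR, any single global threshold necessarily trades error rates off across groups, which is the demographic bias the cited works quantify.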