2017
DOI: 10.1007/s00371-017-1428-z
Review on the effects of age, gender, and race demographics on automatic face recognition

Cited by 60 publications (25 citation statements)
References 56 publications
“…Despite our use of professional‐grade lighting equipment, Black children were more likely to be missing some FaceReader data. This finding is consistent with other reports of automated facial detection analysis (see Abdurrahim, Samad & Huddin, 2018). 3…”
Section: Results (supporting)
Confidence: 94%
“…Ethnicity is a salient example, with findings indicating variability in face recognition algorithms for faces of different ethnicities (O'Toole et al, 2012), with studies demonstrating poorer performance on different demographic cohorts (Klare et al, 2012). This issue has received significant attention in computer vision (Abdurrahim et al, 2018;Garcia et al, 2019), and is an active area of research (Wang & Deng, 2020). While face recognition networks are comprised of multiple steps, the algorithms that find and place landmarks may be a generator of these biases.…”
Section: Study Two - Testing For Potential Biases In Automatic Landmar… (mentioning)
Confidence: 99%
“…With this problem definition in terms of users' matching behaviour, the biometric menagerie implies that there are 'inherent differences in the "recognisability" of different users' (Jain et al 2011, p. 22). An increasing number of studies thus locate the potential causes of differential recognizability of users in their 'demographic' characteristics (see Abdurrahim et al 2017). In a recent article, two biometric experts examine how what they term 'certain intrinsic properties of the subject, such as their ethnicity, gender and eye colour' (Howard and Etter 2013, p. 627) influence the distribution of errors in iris recognition systems.…”
Section: Identifying and Classifying 'Problem User Groups' (mentioning)
Confidence: 99%