2019
DOI: 10.6028/nist.ir.8280

Face Recognition Vendor Test Part 3: Demographic Effects

Abstract: This is the third in a series of reports on ongoing face recognition vendor tests (FRVT) executed by the National Institute of Standards and Technology (NIST). The first two reports cover, respectively, the performance of one-to-one face recognition algorithms used for verification of asserted identities, and the performance of one-to-many face recognition algorithms used for identification of individuals in photo databases. This document extends those evaluations to document accuracy variations across demographics.
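The demographic analysis in the report amounts to comparing the error rates of a single algorithm across groups. As a rough illustration only (this is not NIST's code; the function, threshold, and data below are invented for the sketch), per-group false match and false non-match rates for a one-to-one verification algorithm could be computed like this:

```python
import numpy as np

def error_rates_by_group(scores, same_person, groups, threshold):
    """For each demographic group, compute two one-to-one verification errors:
    - FMR  (false match rate):     fraction of impostor pairs scoring >= threshold
    - FNMR (false non-match rate): fraction of mated pairs scoring < threshold
    Inputs are parallel sequences: one comparison score, one mated/impostor
    flag, and one group label per image pair. All values here are invented.
    """
    scores = np.asarray(scores, dtype=float)
    same_person = np.asarray(same_person, dtype=bool)
    groups = np.asarray(groups)

    rates = {}
    for g in np.unique(groups):
        in_group = groups == g
        impostor = in_group & ~same_person   # different people, same group
        mated = in_group & same_person       # same person, same group
        fmr = np.mean(scores[impostor] >= threshold) if impostor.any() else float("nan")
        fnmr = np.mean(scores[mated] < threshold) if mated.any() else float("nan")
        rates[g] = {"FMR": fmr, "FNMR": fnmr}
    return rates

# Toy usage with made-up comparison scores and group labels:
scores      = [0.91, 0.12, 0.88, 0.62, 0.97, 0.30]
same_person = [True, False, True, False, True, False]
groups      = ["A",  "A",   "B",  "B",   "A",  "B"]
print(error_rates_by_group(scores, same_person, groups, threshold=0.5))
```

At a fixed operating threshold, a higher FMR for one group than another is the kind of demographic differential the report quantifies.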

Cited by 240 publications (132 citation statements) | References 17 publications
“…But NIST also confirmed what Buolamwini and Gebru's gender-classification work suggested: most packages tended to be more accurate for white, male faces than for people of colour or for women [5]. In particular, faces classified in NIST's database as African American or Asian were 10-100 times more likely to be misidentified than those classified as white.…”
Section: More Accurate But Still Biased (supporting)
confidence: 54%
“…It is important to note that our data do not speak to the issue of whether CNNs show differential performance between races or sexes (see [17] for a recent survey). What we are reporting is that the networks are relatively blind to variations on these dimensions that humans regard as highly salient.…”
Section: Results (mentioning)
confidence: 99%
“…The biases may be implicit or explicit, and can be the result of the individuals who wrote the algorithms or of the data on which the algorithm was trained (Tomer, 2019). For example, facial recognition AI has been found to contain racial bias (Grother et al., 2019). It is unlikely that biases can be fully eliminated from Empathetic (and other) AI systems.…”
Section: Empathetic Intelligence (mentioning)
confidence: 99%