2018
DOI: 10.1111/cogs.12633

Human–Computer Interaction in Face Matching

Abstract: Automatic facial recognition is becoming increasingly ubiquitous in security contexts such as passport control. Currently, Automated Border Crossing (ABC) systems in the United Kingdom (UK) and the European Union (EU) require supervision from a human operator who validates correct identity judgments and overrules incorrect decisions. As the accuracy of this human–computer interaction remains unknown, this research investigated how human validation is impacted by a priori face-matching decisions such as those m…

Cited by 34 publications (68 citation statements)
References 43 publications

“…Dowsett and colleagues documented beneficial effects of group decision-making for individuals working in pairs on face matching tasks, however, pair performance was limited by the performance of the best performing individual or fell below expectations if each individual's errors were independent [12]. Fysh and Bindemann showed that identity labels (same, different, or unresolved) provided together with face pairs modulated face matching accuracy [13]. The authors theorized that labels may have drawn attention away from the face stimuli, increasing accuracy when label information was correct, but decreasing it when label information was incorrect.…”
Section: Prior Work
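
The "expectations" mentioned in that statement are presumably the statistical benchmark for two independent observers. A minimal sketch of that benchmark, under our assumption (not necessarily Dowsett and colleagues' analysis) that a pair fails only when both members fail, with illustrative accuracies:

def expected_pair_accuracy(p1, p2):
    # Expected accuracy of a pair that errs only when both individuals err,
    # assuming the two observers' errors are statistically independent.
    return 1 - (1 - p1) * (1 - p2)

print(expected_pair_accuracy(0.80, 0.70))  # 0.94, above either individual alone

On this reading, a pair that merely matches its better member, or falls short of this value, underperforms the independent-errors benchmark.
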
“…Do the human performance numbers as reported in [7,14,15] and others improve when algorithm outcomes are provided? Fysh and Bindemann [13] conducted preliminary research on this and showed identity labels indeed shifted human determinations but postulated this was because of decreased attention to the face matching task. We reproduce their results but implement a signal detection theory framework to show that they are not due to decreased attention but instead arise because the algorithm information introduces a cognitive bias that shifts the human's perception of face similarity.…”
Section: Prior Work
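
As a rough illustration of the signal detection theory framework invoked in that statement, the sketch below uses hypothetical hit and false-alarm rates (not data from the cited studies) to show how the framework separates sensitivity (d′) from response criterion (c): an algorithm label that biases observers shifts c while leaving d′ largely intact, whereas reduced attention to the faces would instead lower d′.

from scipy.stats import norm

def sdt_measures(hit_rate, false_alarm_rate):
    # Sensitivity d' and criterion c from hit and false-alarm rates
    # (standard equal-variance Gaussian SDT).
    z_hit = norm.ppf(hit_rate)
    z_fa = norm.ppf(false_alarm_rate)
    d_prime = z_hit - z_fa               # discriminability of match vs. mismatch pairs
    criterion = -0.5 * (z_hit + z_fa)    # negative values = bias towards "same identity"
    return d_prime, criterion

print(sdt_measures(0.80, 0.20))  # baseline: d' ~ 1.68, c = 0
print(sdt_measures(0.90, 0.40))  # with a "same" label: d' ~ 1.53, c ~ -0.51 (criterion shift)
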
“…Presenting faces in the context of photo-identity documents, for example, biases observers towards making match decisions, thereby reducing detection of mismatches (Feng & Burton, 2019; McCaffery & Burton, 2016). And face-matching decisions can be biased either towards match or mismatch responses by simple onscreen labels that suggest which type of face pair might be shown, mimicking information provided by Automated Border Control systems, even when observers are instructed to ignore this information (Fysh & Bindemann, 2018a).…”
“…Perhaps for this reason, these systems continue to be monitored in practical settings by humans who are responsible for verifying correct decisions made by these systems, whilst simultaneously overruling cases where the system has made an incorrect judgement [28]. Current research suggests that human observers cannot reliably detect instances where the system has made an inaccurate identification [58], implying that algorithms bias the identity judgements of humans. This means that for the foreseeable future, the final identification decision in real world settings will continue to reside with the human observer.…”
Section: Possible Solutions