2021
DOI: 10.7717/peerj.10629

Associations between self-reported and objective face recognition abilities are only evident in above- and below-average recognisers

Abstract: The 20-Item Prosopagnosia Index (PI-20) was recently introduced as a self-report measure of face recognition abilities and as an instrument to help the diagnosis of prosopagnosia. In general, studies using this questionnaire have shown that observers have moderate to strong insights into their face recognition abilities. However, it remains unknown whether these insights are equivalent for the whole range of face recognition abilities. The present study investigates this issue using the Mandarin version of the…

Cited by 18 publications (14 citation statements)
References 40 publications
“…Thus, if this is assumed, the full-view advantage would reflect the contribution of holistic processing to face matching, with higher values indicating a stronger reliance on holistic processing. It has been argued that, in contrast to face recognition (Rossion, 2013; Wong et al., 2021), face matching might rely on a more featural processing mode (Megreya, 2018; Megreya & Bindemann, 2018; Megreya & Burton, 2006; Towler et al., 2017). Remarkably, we found that higher performance in face matching was associated with stronger mask effects, suggesting not only that holistic processing is important for face matching, but also that stronger holistic processing is associated with better performance in face matching.…”
Section: Discussion
Citation type: contrasting
confidence: 45%
“…In other words, if some observers can consistently identify masked faces, this would indicate that face matching is, in principle, solvable even when only the top part of the face is visible. Given the low validity of self-reported measures of face identification (Bobak et al., 2019; Estudillo, 2021; Estudillo & Wong, 2021; Palermo et al., 2017), such an objective individual differences approach would help to screen those observers with superior face matching performance (Bruce et al., 2018; Ramon et al., 2019).…”
Citation type: mentioning
confidence: 99%
“…Indeed, research has shown that unfamiliar face matching skills present substantial individual differences across observers, with some individuals performing at chance levels while others perform at ceiling levels (Bruce et al., 2018; Burton et al., 2010; Estudillo & Bindemann, 2014; Estudillo et al., 2021; McCaffery et al., 2018). Thus, this account highlights the importance of using objective face identification tasks during personnel selection in those applied settings in which the identification of others is required (Bobak et al., 2016; Estudillo, 2021; Estudillo & Wong, 2021; Fysh et al., 2020; Ramon et al., 2019; Robertson et al., 2016). In addition, according to the data limit account, a large variance of errors in face matching can be explained by the properties of the face stimuli (Estudillo & Bindemann, 2014; Fysh & Bindemann, 2017).…”
Section: Introduction
Citation type: mentioning
confidence: 99%