2022
DOI: 10.1101/2022.10.16.512398
Preprint

Deep learning algorithms reveal a new visual-semantic representation of familiar faces in human perception and memory

Abstract: Recent studies show significant similarities between the representations humans and deep neural networks (DNNs) generate for faces. However, two critical aspects of human face recognition are overlooked by these networks. First, human face recognition is mostly concerned with familiar faces, which are encoded by visual and semantic information, while current DNNs solely rely on visual information. Second, humans represent familiar faces in memory, but representational similarities with DNNs were only investiga…

Cited by 17 publications (3 citation statements)
References 43 publications

“…Training DNNs initially on blurred images also provided insights into the potential advantage of the initial low acuity of infants' vision (Vogelsang et al, 2018). Such and many other modifications (e.g., multi-modal self-supervised image-language training, Radford et al, 2021) in the way DNNs are built and trained may generate perceptual effects that are more human-like (Shoham, Grosbard, Patashnik, Cohen-Or, & Yovel, 2022). Yet even current DNNs can advance our understanding of the nature of the high-level representations that are required for face and object recognition (Abudarham, Grosbard, & Yovel, 2021; Hill et al, 2019), which are still undefined in current neural and cognitive models.…”
Section: Introduction
mentioning
confidence: 99%
“…Further, object-trained networks are currently the best model of face-specific neural responses in the primate brain (16, 17) and even appear to contain units selectively responsive to faces (18, 19). A third possibility is that none of the above training regimes might be able to capture all classic signatures of human face perception, and something else might be required, such as a face-specific inductive bias (20, 21) or a higher-level semantic processing of faces (22), to capture human behavioral signatures of face processing. Finally, these hypotheses are not mutually exclusive, and it is possible that different signatures of human face processing may result from optimization for different tasks.…”
mentioning
confidence: 99%