Neurons in the human medial temporal lobe (MTL) that are selective for the identity of specific people are classically thought to encode identity invariant to visual features. However, it remains largely unknown how visual information from higher visual cortex is translated into a semantic representation of an individual person. Here, we show that some MTL neurons are selective for multiple face identities on the basis of shared features that form clusters in the representation of a deep neural network trained to recognize faces. Contrary to prevailing views, we find that these neurons represent an individual’s face through feature-based encoding rather than through association with concepts. The responses of feature neurons depended on neither face identity nor face familiarity, and the region of feature space to which they were tuned predicted their responses to new face stimuli. Our results provide critical evidence bridging the perception-driven representation of facial features in higher visual cortex and the memory-driven representation of semantics in the MTL, which may form the basis of declarative memory.
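The claim that a tuned region of DNN feature space predicts responses to new faces suggests a simple cross-validated test. Below is a minimal sketch, assuming precomputed DNN face embeddings and one neuron's firing rate per face; the random data, the response-weighted centroid estimate of the tuned region, and all variable names are hypothetical illustrations, not the study's actual pipeline.

```python
# A minimal sketch of the feature-space analysis, assuming precomputed DNN face
# embeddings and one neuron's firing rate per face. All data, and the
# response-weighted "tuned region" estimate, are hypothetical illustrations.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))               # stand-in DNN embeddings, 500 faces
y = rng.poisson(5.0, size=500).astype(float)  # stand-in spike rates, one neuron

# Faces sharing features form clusters in the DNN representation; a feature
# neuron should fire for faces of one cluster across different identities.
cluster_id = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
per_cluster_rate = [y[cluster_id == k].mean() for k in range(8)]

# Estimate the neuron's tuned region from training faces only, as the
# response-weighted centroid of feature space, then test whether distance
# to that region predicts responses to entirely new faces.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
center = np.average(X_tr, axis=0, weights=y_tr)
d_tr = np.linalg.norm(X_tr - center, axis=1, keepdims=True)
d_te = np.linalg.norm(X_te - center, axis=1, keepdims=True)
model = LinearRegression().fit(d_tr, y_tr)
print("held-out R^2:", model.score(d_te, y_te))  # near 0 here: data are random
```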
Faces are salient social stimuli that attract stereotypical patterns of eye movements. The human amygdala and hippocampus are involved in various aspects of face processing; however, it remains unclear how they encode the content of fixations when viewing faces. To answer this question, we employed single-neuron recordings with simultaneous eye tracking while participants viewed natural face stimuli. We found a class of neurons in the human amygdala and hippocampus that encoded salient facial features such as the eyes and mouth. With a control experiment using non-face stimuli, we further showed that this feature selectivity was specific to faces. We also found another population of neurons that differentiated saccades to the eyes from saccades to the mouth. Population decoding confirmed these results and further revealed the temporal dynamics of facial feature coding. Interestingly, we found that the amygdala and hippocampus played different roles in encoding facial features. Lastly, we revealed two functional roles of feature-selective neurons: (1) they encoded the salient region for face recognition, and (2) their activity was related to perceived social trait judgments. Together, our results link eye movements to neural face processing and provide important mechanistic insights into human face perception.
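A minimal sketch of what the population-decoding analysis could look like, assuming spike counts binned around fixation onset; the data, bin sizes, and classifier choice are hypothetical placeholders rather than the study's recordings or methods.

```python
# Decode the fixated facial feature (eyes vs. mouth) from population activity
# in each time bin to trace the temporal dynamics of face-feature coding.
# All data here are random placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_neurons, n_bins = 200, 50, 20                    # e.g., 20 bins of 50 ms
spikes = rng.poisson(2, size=(n_trials, n_neurons, n_bins))  # binned spike counts
labels = rng.integers(0, 2, size=n_trials)                   # 0 = eyes, 1 = mouth

accuracy = [
    cross_val_score(LinearSVC(dual=False), spikes[:, :, t], labels, cv=5).mean()
    for t in range(n_bins)
]
print(np.round(accuracy, 2))  # hovers near chance (0.5) because data are random
```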
Autism spectrum disorder (ASD) is characterized by difficulties with social processes, interactions, and communication, yet the neurocognitive bases underlying these difficulties remain unclear. Here, we triangulated a ‘trans-diagnostic’ approach to personality, social trait judgments of faces, and neurophysiology to investigate (1) the relative position of autistic traits within a comprehensive social-affective personality space, and (2) the distinct associations between social-affective personality dimensions and social trait judgments from faces in individuals with ASD and neurotypical individuals. We collected personality and face judgment data from a large sample of online participants (N = 89 self-identified ASD; N = 307 neurotypical controls). Factor analysis of 33 subscales from 10 social-affective personality questionnaires identified a four-dimensional personality space. ASD and control participants did not differ significantly along the personality dimensions of empathy and prosociality, antisociality, or social agreeableness. However, ASD participants exhibited a weaker association between prosocial personality dimensions and judgments of facial trustworthiness and warmth than control participants did. Neurophysiological data likewise indicated a weaker association in ASD participants between these personality dimensions and neuronal representations of trustworthiness and warmth from faces. These results suggest that an atypical association between social-affective personality and social trait judgments from faces may contribute to the social and affective difficulties associated with ASD.
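A minimal sketch of the factor-analysis step, assuming a participants-by-subscales score matrix (396 participants x 33 subscales, matching the sample above); the random data and the scikit-learn implementation with varimax rotation are illustrative assumptions, not the study's exact procedure.

```python
# Extract a 4-dimensional social-affective personality space from 33 subscales.
# The score matrix here is a random placeholder.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
scores = rng.normal(size=(396, 33))  # 89 ASD + 307 controls x 33 subscale scores

z = StandardScaler().fit_transform(scores)            # standardize each subscale
fa = FactorAnalysis(n_components=4, rotation="varimax").fit(z)

loadings = fa.components_.T    # 33 subscales x 4 factors: interpret the dimensions
person_dims = fa.transform(z)  # each participant's position in the 4-D space
print(loadings.shape, person_dims.shape)
```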
An important question in human face perception research is whether the neural representation of faces is dynamically modulated by context. In particular, although a large neuroimaging literature has probed the neural representation of faces, few studies have investigated which low-level structural and textural facial features parametrically drive neural responses to faces, or whether the representation of these features is modulated by task. To answer these questions, we employed two task instructions while participants viewed the same faces. We first identified brain regions that parametrically encoded high-level social traits such as perceived facial trustworthiness and dominance, and we showed that these regions were modulated by task instructions. We then employed a data-driven computational face model with parametrically generated faces and identified brain regions that encoded the low-level variation in the faces (shape and skin texture) driving neural responses. We further analyzed how these neural feature vectors evolved along the visual processing stream, and visualized and interpreted them. Together, our results reveal a flexible neural representation of faces, for both low-level features and high-level social traits, in the human brain.
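A minimal sketch of one way to test parametric encoding and task modulation, assuming per-face model coefficients (shape and skin texture) and a region-of-interest response under each task instruction; all data and names are hypothetical.

```python
# Fit one linear encoding model per task instruction; a task-modulated region
# should yield different feature weights ("neural feature vectors") across tasks.
# All data here are random placeholders.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
face_params = rng.normal(size=(300, 50))   # shape + skin-texture coefficients
roi = {"task1": rng.normal(size=300),      # ROI response under instruction 1
       "task2": rng.normal(size=300)}      # same faces, different instruction

idx_tr, idx_te = train_test_split(np.arange(300), random_state=0)
weights = {}
for task, y in roi.items():
    model = RidgeCV(alphas=np.logspace(-2, 3, 10)).fit(face_params[idx_tr], y[idx_tr])
    weights[task] = model.coef_
    print(task, "held-out R^2:", round(model.score(face_params[idx_te], y[idx_te]), 2))

# Low correlation between the two weight vectors would indicate task modulation.
print("weight correlation:",
      round(np.corrcoef(weights["task1"], weights["task2"])[0, 1], 2))
```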