People make trait inferences based on facial appearance despite little evidence that these inferences accurately reflect personality. The authors tested the hypothesis that such inferences are driven in part by structural resemblance to emotional expressions. Participants first judged emotionally neutral faces on a set of trait dimensions. The face images were then submitted to a Bayesian network classifier trained to detect emotional expressions. Using the classifier, the authors showed that neutral faces perceived to possess various personality traits bear objective resemblance to emotional expressions. In general, neutral faces perceived to have positive valence resemble happiness, faces perceived to have negative valence resemble disgust and fear, and faces perceived to be threatening resemble anger. These results support the idea that trait inferences are in part the result of an overgeneralization of emotion recognition systems. Under this hypothesis, emotion recognition systems, which typically extract accurate information about a person's emotional state, are engaged during the perception of neutral faces that bear subtle resemblance to emotional expressions. These perceived emotions could then be misattributed as traits.
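The core of this analysis is correlating perceived traits with the expression probabilities a classifier assigns to neutral faces. Below is a minimal sketch of that logic, not the authors' implementation: the data, feature vectors, and trait ratings are hypothetical, and scikit-learn's GaussianNB stands in for the Bayesian network classifier used in the study.

```python
# Sketch: does a neutral face's classifier-derived resemblance to an emotional
# expression track how participants rate its traits? All data are simulated.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical training set: facial feature vectors labeled with expressions.
X_train = rng.normal(size=(300, 20))      # e.g., geometric/appearance features
y_train = rng.integers(0, 6, size=300)    # 6 expression classes (0 = happy, ...)

clf = GaussianNB().fit(X_train, y_train)  # stand-in for the Bayesian network classifier

# Hypothetical neutral faces: features plus mean perceived-trait ratings.
X_neutral = rng.normal(size=(66, 20))
trait_valence = rng.normal(size=66)       # e.g., perceived trustworthiness

# "Resemblance to happiness" = posterior probability of the happy class
# assigned to each emotionally neutral face.
p_happy = clf.predict_proba(X_neutral)[:, 0]

r, p = pearsonr(trait_valence, p_happy)
print(f"valence vs. happiness resemblance: r = {r:.2f}, p = {p:.3f}")
```

With real stimuli, a positive correlation here would indicate that neutral faces rated as more positive objectively resemble happy expressions, which is the pattern the abstract reports.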
Top-down attention is an essential cognitive ability, allowing our finite brains to process complex natural environments by prioritizing information relevant to our goals. Previous evidence suggests that top-down attention operates by modulating stimulus-evoked neural activity within visual areas specialized for processing goal-relevant information. We show that top-down attention also has a separate influence on the background coupling between visual areas: adopting different attentional goals resulted in specific patterns of noise correlations in the visual system, whereby intrinsic activity in the same set of low-level areas was shared with only those high-level areas relevant to the current goal. These changes occurred independently of evoked activity, persisted without visual stimulation, and predicted behavioral success in deploying attention better than the modulation of evoked activity. This attentional switching of background connectivity suggests that attention may help synchronize different levels of the visual processing hierarchy, forming state-dependent functional pathways in human visual cortex to prioritize goal-relevant information.

Keywords: category selectivity | functional MRI | goal-directed attention | retinotopic occipital cortex | ventral temporal cortex

The ventral visual stream, the neural substrate of object perception (1), is organized hierarchically. At early stages, occipital cortex decomposes visual images into simple features, such as form and orientation (2). At later stages, ventral temporal cortex combines these features into complex objects, such as faces and scenes (3). Although this hierarchy is hard-wired (4), human vision is flexible: our goals and intentions determine what we see via top-down attention (5). How does top-down attention prioritize goal-relevant information in the ventral visual stream?

The conventional answer is that attention prioritizes certain information by enhancing evoked responses in cortical areas that represent this information. For example, when attending to faces (e.g., when looking for a friend in a crowd, or in our study, monitoring for a repeated face in a stream of composite images that contain both a face and a distracting scene), the response of the fusiform face area (FFA) to faces is enhanced; in contrast, when attending to scenes (e.g., when looking for a restaurant in a new town, or in our study, monitoring for a repeated scene in the composite images), the response of the parahippocampal place area (PPA) to scenes is enhanced (6). This attentional modulation is interpreted as resulting from top-down selection of goal-relevant information and relative inhibition of goal-irrelevant information. Similar effects have been observed throughout visual cortex and with diverse methodologies, including positron emission tomography (7), functional magnetic resonance imaging (fMRI) (6, 8), and single-cell recordings (9, 10). By strengthening representations, top-down attention may ensure that goal-relevant information competes better against goal-irrelevant information.
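The background-coupling claim rests on computing noise correlations after the stimulus-evoked response has been removed. The sketch below illustrates that general approach under stated assumptions: the ROI time series, design matrix, and region names are hypothetical placeholders, and the regression/correlation steps are a simplified stand-in for the study's actual pipeline.

```python
# Sketch of a background-connectivity analysis: regress out evoked activity,
# then correlate the residual (intrinsic) fluctuations between ROIs.
import numpy as np

def residualize(ts, design):
    """Residuals of an OLS regression of a time series on evoked regressors."""
    beta, *_ = np.linalg.lstsq(design, ts, rcond=None)
    return ts - design @ beta

def background_corr(roi_a, roi_b, design):
    """Noise correlation between two ROIs after removing evoked responses."""
    ra, rb = residualize(roi_a, design), residualize(roi_b, design)
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(1)
n_tr = 200
# Hypothetical design matrix: intercept plus stimulus-evoked regressors.
design = np.column_stack([np.ones(n_tr), rng.normal(size=(n_tr, 3))])

# Hypothetical mean time series for an early visual ROI, FFA, and PPA in one run.
v4, ffa, ppa = (rng.normal(size=n_tr) for _ in range(3))

# Under an attend-face goal, the prediction is stronger low-level-to-FFA than
# low-level-to-PPA background coupling (and the reverse when attending to scenes).
print("V4-FFA background coupling:", background_corr(v4, ffa, design))
print("V4-PPA background coupling:", background_corr(v4, ppa, design))
```

Comparing these residual correlations across attend-face and attend-scene runs, rather than comparing evoked response amplitudes, is what distinguishes the background-connectivity result from conventional attentional modulation.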
Neuroscience research on the social evaluation of faces has accumulated over the last decade, yielding divergent results. We used a meta-analytic technique, multi-level kernel density analysis (MKDA), to analyze 29 neuroimaging studies on face evaluation. Across negative face evaluations, we observed the most consistent activations in bilateral amygdala. Across positive face evaluations, we observed the most consistent activations in medial prefrontal cortex, pregenual anterior cingulate cortex (pgACC), medial orbitofrontal cortex (mOFC), left caudate and nucleus accumbens (NAcc). Based on additional analyses comparing linear and non-linear responses, we propose a ventral/dorsal dissociation within the amygdala, wherein separate populations of neurons code for face valence and intensity, respectively. Finally, we argue that some of the differences between studies are attributable to differences in the typicality of face stimuli. Specifically, extremely attractive faces are more likely to elicit responses in NAcc/caudate and mOFC.
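MKDA summarizes, for each voxel, the weighted proportion of study contrasts that report a nearby activation peak. The following is a minimal sketch of that idea only, with hypothetical coordinates, weights, and grid dimensions; the published method additionally uses study-level weighting rules and a Monte Carlo null distribution for significance, which are not reproduced here.

```python
# Sketch of the MKDA density statistic: convolve each study's peaks with a
# spherical kernel, then take the weighted proportion of studies active per voxel.
import numpy as np

SHAPE = (40, 48, 40)   # hypothetical voxel grid
RADIUS = 5             # kernel radius in voxels (~10 mm at 2 mm resolution)

def study_map(peaks, shape=SHAPE, radius=RADIUS):
    """Binary map: True within `radius` voxels of any reported peak."""
    grid = np.indices(shape).reshape(3, -1).T
    hit = np.zeros(shape, dtype=bool).ravel()
    for p in peaks:
        hit |= np.linalg.norm(grid - np.asarray(p), axis=1) <= radius
    return hit.reshape(shape)

# Hypothetical input: each study contributes peak coordinates and a weight
# (MKDA typically weights by a function of sample size).
studies = [
    {"peaks": [(20, 24, 15)], "weight": 4.0},
    {"peaks": [(21, 25, 14), (20, 10, 30)], "weight": 5.5},
    {"peaks": [(5, 40, 20)], "weight": 3.2},
]

maps = np.stack([study_map(s["peaks"]) for s in studies])
weights = np.array([s["weight"] for s in studies])

# Weighted proportion of study maps active at each voxel.
density = (weights[:, None, None, None] * maps).sum(axis=0) / weights.sum()
print("maximum density:", density.max())  # significance would come from a Monte Carlo null
```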