According to predictive models of emotion, people use previous experience to construct affective predictions, represented multimodally in the brain. We do not live in a stable world, however. Some environments are uncertain, whereas others are not. In two experiments, we investigated how experiencing previous certain versus uncertain contingencies shaped subjective reactions to future affective stimuli, within and across sensory modalities. Two S1-S2 paradigms were used as learning and test phases. S1s were colored circles; S2s were negative or neutral affective pictures or sounds. During the learning phase, participants (N = 192, 179) were assigned to the certain (CG) or uncertain group (UG) and presented with 100% (CG) or 50% (UG) S1-S2 congruency between visual stimuli. During the test phase, participants were presented with a new 75% S1-S2 paradigm and visual (Experiment 1) or auditory (Experiment 2) S2s. Participants were asked to rate the expected valence of upcoming S2s (expectancy ratings) or the experienced valence and arousal of the S2s (valence and arousal ratings). In both experiments, the CG reported more extreme expectancy ratings than the UG, suggesting that experiencing previous reliable S1-S2 associations led CG participants to subsequently predict similar associations. No group differences emerged on valence and arousal ratings, which were more prominently influenced by the new 75% contingencies of the test phase than by previously learned contingencies. Lastly, when comparing the two experiments, no significant group by experiment interaction was found, supporting the hypothesis of cross-modality generalization at the subjective level. Overall, our results advance knowledge about the mechanisms by which previously learned contingencies shape subjective affective experience.
Several previous studies have interfered with the observer's facial mimicry during a variety of facial expression recognition tasks, providing evidence for the role of facial mimicry and sensorimotor activity in emotion processing. In this theoretical context, a particularly intriguing facet has been neglected, namely whether blocking facial mimicry modulates conscious perception of facial expressions of emotion. To address this issue, we used a binocular rivalry paradigm, in which two dissimilar stimuli presented to the two eyes alternatingly dominate conscious perception. On each trial, female participants (N = 32) were exposed through anaglyph glasses to a rivalrous pair consisting of a neutral and a happy expression of the same individual, in two conditions: in one, they could freely use their facial mimicry; in the other, they had to keep a chopstick between their lips, constraining the mobility of the zygomatic muscle and producing 'noise' for sensorimotor simulation. We found that blocking facial mimicry affected perceptual dominance in terms of cumulative time, favoring neutral faces, but it did not change the time before the first dominance was established. Taken together, our results open the door to future investigations of the intersection between sensorimotor simulation models and conscious perception of emotional facial expressions.
Visual working memory (VWM) is one of the most investigated cognitive systems, functioning as a hub between low- and high-level processes. Remarkably, its role in the human cognitive architecture makes it a stage of crucial importance for the study of socio-affective cognition, also in relation to psychopathology, such as anxiety. Among socio-affective stimuli, faces are of primary importance. How faces and facial expressions are encoded and maintained in VWM is the focus of this review. Within the main theoretical VWM models, we review research comparing VWM representations of faces with those of other classes of stimuli. We further present previous work investigating whether and how both static (i.e., ethnicity, trustworthiness, and identity) and changeable (i.e., facial expressions) facial features are represented in VWM. We then examine research showing qualitative differences in VWM for face representations as a function of psychopathology and personality traits. The findings we review are not always consistent with one another, and we therefore highlight methodological differences as the primary source of inconsistency. Finally, we provide some suggestions for future research in this field, in order to foster our understanding of the representation of faces in VWM and its potential role in supporting socio-affective cognition.
According to predictive models of emotion, people use their previous experience to construct new affective predictions. We do not live in a stable world, however. Some environments are uncertain, others are not. This study investigated how experiencing certain vs. uncertain probabilistic relationships shapes subjective reactions to new affective stimuli, within and across sensory modalities. Two S1-S2 paradigms were employed as learning and test phases in two experiments. S1s were colored circles; S2s were negative or neutral affective pictures or sounds. Participants (N = 192, 179) were assigned to the certain (CG) or uncertain group (UG), and they were presented with 100% (CG) or 50% (UG) S1-S2 congruency between visual stimuli during the learning phase. During the test phase, both groups were presented with a new S1-S2 paradigm with 75% S1-S2 congruency, and visual (Experiment 1) or auditory (Experiment 2) S2s. Participants were asked to rate the expected valence of upcoming S2s (expectancy ratings), or their experienced valence and arousal (valence and arousal ratings). In both experiments, participants in the CG reported more negative expectancy ratings after the S1s previously paired with negative stimuli, whereas no group differences emerged on valence and arousal ratings. Furthermore, when comparing the two experiments, no significant group by experiment interaction was found. Overall, and in line with predictive models, our results suggest that relying on certain previous experience shapes subjective expectancies toward a coherent labeling of the predicted valence of future stimuli, and that this process develops similarly across sensory modalities.
With the advent of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic, the theme of emotion recognition from facial expressions has become highly relevant due to the widespread use of face masks as one of the main measures imposed to counter the spread of the virus. Unsurprisingly, several studies published in the last two years have shown that accuracy in the recognition of basic emotions expressed by faces wearing masks is reduced. However, less is known about the impact that wearing face masks has on the ability to recognize emotions from subtle expressions, and even less is known regarding the role of interindividual differences (such as alexithymic and autistic traits) in emotion processing. This study investigated the perception of all six basic emotions (anger, disgust, fear, happiness, sadness, and surprise), both as a function of the face mask and as a function of the facial expressions' intensity (full vs. subtle), in terms of participants' uncertainty in their responses, misattribution errors, and perceived intensity. The experiment was conducted online on a large sample of participants (N = 129). Participants completed the 20-item Toronto Alexithymia Scale and the Autism Spectrum Quotient and then performed an emotion-recognition task involving face stimuli with or without a mask, displaying full or subtle expressions. Each face stimulus was presented alongside the Geneva Emotion Wheel (GEW), and participants had to indicate which emotion they believed the other person was feeling and its intensity using the GEW. For each combination of our variables, we computed indices of 'uncertainty' (i.e., the spread of responses around the correct emotion category), 'bias' (i.e., systematic errors in recognition), and 'perceived intensity' (i.e., the distance from the center of the GEW).
We found that face masks increased uncertainty for all facial expressions of emotion, except for fear when intense, and that disgust was systematically confused with anger (i.e., response bias). Furthermore, when faces were covered by the mask, all emotions were perceived as less intense, and this was particularly evident for subtle expressions. Finally, we did not find any evidence of a relationship between these indices and alexithymic/autistic traits.