Facial electromyography research shows that corrugator supercilii (“frowning muscle”) activity tracks the emotional valence of linguistic stimuli. Grounded or embodied accounts of language processing take such activity to reflect the simulation or “reenactment” of emotion, as part of the retrieval of word meaning (e.g., of “furious”) and/or of building a situation model (e.g., for “Mark is furious”). However, the same muscle also expresses our primary emotional evaluation of things we encounter. Language-driven affective simulation can easily be at odds with the reader’s affective evaluation of what language describes (e.g., when we like Mark being furious). To examine what happens in such cases, we independently manipulated simulation valence and moral evaluative valence in short narratives. Participants first read about characters behaving in a morally laudable or objectionable fashion: this immediately led to corrugator activity reflecting positive or negative affect. Next, and critically, a positive or negative event befell these same characters. Here, the corrugator response did not track the valence of the event, but reflected both simulation and moral evaluation. This highlights the importance of unpacking coarse notions of affective meaning in language processing research into components that reflect simulation and evaluation. Our results also call for a re-evaluation of the interpretation of corrugator EMG, as well as other affect-related facial muscles and other peripheral physiological measures, as unequivocal indicators of simulation. Research should explore how such measures behave in richer and more ecologically valid language processing, such as narrative, thereby refining our understanding of simulation within a framework of grounded language comprehension.
Facial electromyography research shows that corrugator supercilii (“frowning muscle”) activity tracks the emotional valence of linguistic stimuli. Grounded or embodied accounts of language processing take such activity to reflect the simulation or “re-enactment” of emotion, as part of the retrieval of word meaning (e.g., of “furious”) and/or of building a situation model (e.g., for “Mark is furious”). However, the same muscle also expresses our primary emotional evaluation of things we encounter. Language-driven affective simulation can easily be at odds with the reader’s affective evaluation of what language describes (e.g., when we like Mark being furious). In a previous experiment (’t Hart et al., 2018) we demonstrated that neither language-driven simulation nor affective evaluation alone seems sufficient to explain the corrugator patterns that emerge during online language comprehension in these complex cases. Those results showed support for a multiple-drivers account of corrugator activity, where both simulation and evaluation processes contribute to the activation patterns observed in the corrugator. The study at hand replicates and extends these findings. With more refined control over when precisely affective information became available in a narrative, we again find results that speak against an interpretation of corrugator activity in terms of simulation or evaluation alone, and as such support the multiple-drivers account. Additional evidence suggests that the simulation driver involved reflects simulation at the level of situation model construction, rather than at the level of retrieving concepts from long-term memory. In all, by giving insights into how language-driven simulation meshes with the reader’s evaluative responses during an unfolding narrative, this study contributes to the understanding of affective language comprehension.
Many of our everyday emotional responses are triggered by language, and a full understanding of how people use language therefore also requires an analysis of how words elicit emotion as they are heard or read. We report a facial electromyography experiment in which we recorded corrugator supercilii, or “frowning muscle”, activity to assess how readers processed emotion-describing language in moral and minimal in/outgroup contexts. Participants read sentence-initial phrases like “Mark is angry” or “Mark is happy” after descriptions that defined the character at hand as a good person, a bad person, a member of a minimal ingroup, or a member of a minimal outgroup (realizing the latter two by classifying participants as personality “type P” and having them read about characters of “type P” or “type O”). As in our earlier work, moral group status of the character clearly modulated how readers responded to descriptions of character emotions, with more frowning to “Mark is angry” than to “Mark is happy” when the character had previously been described as morally good, but not when the character had been described as morally bad. Minimal group status, however, did not matter to how the critical phrases were processed, with more frowning to “Mark is angry” than to “Mark is happy” across the board. Our morality-based findings are compatible with a model in which readers use their emotion systems to simultaneously simulate a character’s emotion and evaluate that emotion against their own social standards. The minimal-group result does not contradict this model, but also does not provide new evidence for it.
Beyond recognizing words, parsing sentences, building situation models, and other cognitive accomplishments, language comprehension always involves some degree of emotion too, with or without awareness. Language excites, bores, or otherwise moves us, and studying how it does so is crucial. This chapter examines the potential of facial electromyography (EMG) to study language-elicited emotion. After discussing the limitations of self-report measures, we examine various other tools to tap into emotion, and then zoom in on the electrophysiological recording of facial muscle activity. Surveying psycholinguistics, communication science, and other fields, we provide an exhaustive qualitative review of the relevant facial EMG research to date, exploring 55 affective comprehension experiments with single words, phrases, sentences, or larger pieces of discourse. We discuss the outcomes of this research, and evaluate the various practices, biases, and omissions in the field. We also present the fALC model, a new conceptual model that lays out the various potential sources of facial EMG activity during language comprehension. Our review suggests that facial EMG recording is a powerful tool for exploring the conscious as well as unconscious aspects of affective language comprehension. However, we also think it is time to take on a bit more complexity in this research field, by, for example, considering the possibility that multiple active generators can simultaneously contribute to an emotional facial expression, by studying how the communicator’s stance and social intention can give rise to emotion, and by studying facial expressions not just as indexes of inner states, but also as social tools that enrich everyday verbal interactions.