Core to understanding emotion are subjective experiences and their embodiment in facial behavior. Past studies have focused on six emotions and prototypical facial poses, reflecting limitations in scale and narrow assumptions about emotion. We examine 45,231 reactions to 2,185 evocative videos, largely in North America, Europe, and Japan, collecting participants’ self-reported experiences in English or Japanese and manual and automated annotations of facial movement. We uncover 21 dimensions of emotion underlying the experiences reported across languages. Facial expressions predict at least 12 dimensions of experience, despite individual variability. We also identify culture-specific display tendencies: many facial movements differ in intensity in Japan compared with the U.S./Canada and Europe, but represent similar experiences. These results reveal how people actually experience and express emotion: in high-dimensional, categorical, and complex fashion.
Central to science and technology are questions about how to measure facial expression. The current gold standard is the facial action coding system (FACS), which is often assumed to account for all facial muscle movements relevant to perceived emotion. However, the mapping from FACS codes to perceived emotion is not well understood. Six prototypical configurations of facial action units (AUs) are sometimes assumed to account for perceived emotion, but this hypothesis remains largely untested. Here, using statistical modeling, we examine how FACS codes actually correspond to perceived emotions in a wide range of naturalistic expressions. Each of 1,456 facial expressions was independently FACS coded by two experts (r = .84, κ = .84). Naive observers reported the emotions they perceived in each expression in many different ways, including emotion categories (N = 666); valence, arousal, and appraisal dimensions (N = 1,116); authenticity (N = 121); and free response (N = 193). We find that facial expressions are much richer in meaning than typically assumed: at least 20 patterns of facial muscle movements captured by FACS have distinct perceived emotional meanings. Surprisingly, however, FACS codes do not offer a complete description of real-world facial expressions, capturing no more than half of the reliable variance in perceived emotion. Our findings suggest that the perceived emotional meanings of facial expressions are most accurately and efficiently represented using a wide range of carefully selected emotion concepts, such as the Cowen & Keltner (2019) taxonomy of 28 emotions. Further work is needed to characterize the anatomical bases of these facial expressions.
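The inter-rater agreement reported above (κ = .84) refers to Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. As a minimal illustrative sketch (the coder labels below are hypothetical, not data from the study), kappa for two FACS coders rating the presence of a single action unit can be computed as:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters assigning categorical labels to the same items."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labeled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: product of each coder's marginal label frequencies,
    # summed over labels.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical per-expression codes for one AU (1 = present, 0 = absent).
a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
b = [1, 1, 0, 0, 0, 0, 1, 1, 0, 1]
print(cohens_kappa(a, b))  # → 0.8 (raw agreement is 0.9; chance is 0.5)
```

In practice, libraries such as scikit-learn provide an equivalent `cohen_kappa_score`; the hand-rolled version above is only meant to make the chance-correction step explicit.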