For more than half a century, emotion researchers have attempted to establish the dimensional space that most economically accounts for similarities and differences in emotional experience. Today, many researchers focus exclusively on two-dimensional models involving valence and arousal. Adopting a theoretically based approach, we show for three languages that four dimensions are needed to satisfactorily represent similarities and differences in the meaning of emotion words. In order of importance, these dimensions are evaluation-pleasantness, potency-control, activation-arousal, and unpredictability. They were identified on the basis of the applicability of 144 features representing the six components of emotions: (a) appraisals of events, (b) psychophysiological changes, (c) motor expressions, (d) action tendencies, (e) subjective experiences, and (f) emotion regulation.
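As an illustration of how a dimensional structure can be recovered from feature-applicability data, the following is a minimal sketch in Python, assuming the ratings are arranged as an emotion-words × features matrix and using principal component analysis on random placeholder data; the matrix contents, sizes, and the exact analytic procedure of the original study are assumptions for illustration only.

```python
# Minimal sketch (not the authors' exact analysis): recovering a low-dimensional
# structure from hypothetical feature-applicability ratings of emotion words.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical data: 24 emotion words rated on 144 features drawn from the six
# components (appraisal, physiology, expression, action tendency, feeling, regulation).
n_words, n_features = 24, 144
ratings = rng.normal(size=(n_words, n_features))

# Standardize features, then extract four principal components.
z = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)
pca = PCA(n_components=4)
scores = pca.fit_transform(z)          # coordinates of each word on the 4 dimensions
print(pca.explained_variance_ratio_)   # variance accounted for by each dimension
```

With real ratings, the loadings of the 144 features on each component would be inspected to interpret the dimensions (e.g., evaluation-pleasantness, potency-control).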
To investigate the perception of emotional facial expressions, researchers rely on shared sets of photos or videos, most often generated by actor portrayals. The drawback of such standardized material is a lack of flexibility and controllability, as it does not allow the systematic parametric manipulation of specific features of facial expressions on the one hand, or of more general properties of facial identity (age, ethnicity, gender) on the other. To remedy this problem, we developed FACSGen: a novel tool that allows the creation of realistic synthetic 3D facial stimuli, both static and dynamic, based on the Facial Action Coding System. FACSGen provides researchers with total control over facial action units and the corresponding informational cues in 3D synthetic faces. We present four studies validating both the software and the general methodology of systematically generating controlled facial expression patterns for stimulus presentation.
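To make the idea of parametric control over dynamic stimuli concrete, here is a minimal sketch, illustrative only and not FACSGen code, of how an action unit's intensity could be ramped over time to yield per-frame values for a dynamic expression; the function name and parameters are hypothetical.

```python
# Illustrative sketch (hypothetical helper, not FACSGen's API): generate per-frame
# intensity values that ramp an action unit up linearly and then hold at its peak.
import numpy as np

def intensity_ramp(peak: float, onset_s: float, hold_s: float, fps: int = 25) -> np.ndarray:
    """Linear onset from 0 to `peak` over `onset_s` seconds, then hold for `hold_s` seconds."""
    onset = np.linspace(0.0, peak, int(onset_s * fps), endpoint=False)
    hold = np.full(int(hold_s * fps), peak)
    return np.concatenate([onset, hold])

# Example: AU12 (lip corner puller) reaching 80% intensity over 500 ms, held for 1 s.
frames = intensity_ramp(peak=0.8, onset_s=0.5, hold_s=1.0)
print(frames.shape)  # (37,) at 25 frames per second
```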
The goal of this study was to examine behavioral and electrophysiological correlates of involuntary orienting toward rapidly presented angry faces in non-anxious, healthy adults, using a dot-probe task in conjunction with high-density event-related potentials and a distributed source localization technique. Consistent with previous studies, participants showed hypervigilance toward angry faces, as indexed by facilitated response times for validly cued probes following angry faces and by an enhanced P1 component. An opposite pattern was found for happy faces, suggesting that attention was directed toward the relatively more threatening stimuli in the visual field (the neutral faces). Source localization of the P1 effect for angry faces indicated increased activity within the anterior cingulate cortex, possibly reflecting conflict experienced during invalidly cued trials. No modulation of the early C1 component was found for affect or spatial attention, and the face-sensitive N170 was not modulated by emotional expression. Results suggest that the earliest modulation of spatial attention by face stimuli is manifested in the P1 component and provide insight into the mechanisms underlying attentional orienting toward cues of threat and social disapproval. Keywords: spatial attention; anger; face perception; event-related potentials; source localization.
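For readers unfamiliar with the dot-probe measure, the following is a minimal sketch, using simulated trial data and hypothetical column names, of how the cue-validity effect (faster responses to probes replacing the attended face) can be computed per emotion condition; it is not the analysis pipeline used in the study.

```python
# Minimal sketch (simulated data, hypothetical column names): computing the
# cue-validity effect that indexes attentional bias in a dot-probe task.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 200
trials = pd.DataFrame({
    "cue_emotion": rng.choice(["angry", "happy"], size=n),
    "validity": rng.choice(["valid", "invalid"], size=n),
    "rt_ms": rng.normal(420, 40, size=n),
})

# Mean RT per emotion and validity, then invalid minus valid per emotion.
means = trials.groupby(["cue_emotion", "validity"])["rt_ms"].mean().unstack()
bias = means["invalid"] - means["valid"]
print(bias)  # positive values: faster responses on validly cued trials,
             # i.e., attention was drawn toward that face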
In this article, we present FACSGen 2.0, new animation software for creating static and dynamic three-dimensional facial expressions on the basis of the Facial Action Coding System (FACS). FACSGen permits total control over the action units (AUs), which can be animated at all levels of intensity and applied alone or in combination to an infinite number of faces. In two studies, we tested the validity of the software for the AU appearance defined in the FACS manual and the conveyed emotionality of FACSGen expressions. In Experiment 1, four FACS-certified coders evaluated the complete set of 35 single AUs and 54 AU combinations for AU presence or absence, appearance quality, intensity, and asymmetry. In Experiment 2, lay participants performed a recognition task on emotional expressions created with FACSGen software and rated the similarity of expressions displayed by human and FACSGen faces. Results showed good to excellent classification levels for all AUs by the four FACS coders, suggesting that the AUs are valid exemplars of FACS specifications. Lay participants' recognition rates for nine emotions were high, and human and FACSGen expressions were rated as highly similar. The findings demonstrate the effectiveness of the software in producing reliable and emotionally valid expressions, and suggest its application in numerous scientific areas, including perception, emotion, and clinical and neuroscience research.
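As a conceptual illustration, and not FACSGen's actual interface, the sketch below represents a FACS-coded expression as a mapping from action units to intensities on the FACS A-E scale, the kind of specification from which single AUs or combinations such as a prototypical happiness display (AU6 cheek raiser plus AU12 lip corner puller) can be assembled; the class and method names are hypothetical.

```python
# Conceptual sketch (hypothetical API, not FACSGen's): an expression as a
# mapping from FACS action units to intensities on the A-E scale.
from dataclasses import dataclass, field

INTENSITIES = ("A", "B", "C", "D", "E")  # trace ... maximum

@dataclass
class ExpressionSpec:
    aus: dict[int, str] = field(default_factory=dict)  # AU number -> intensity

    def set_au(self, au: int, intensity: str) -> None:
        if intensity not in INTENSITIES:
            raise ValueError(f"intensity must be one of {INTENSITIES}")
        self.aus[au] = intensity

# Example: a prototypical happiness pattern (AU6 + AU12).
smile = ExpressionSpec()
smile.set_au(6, "C")
smile.set_au(12, "D")
print(smile.aus)
```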