2012
DOI: 10.1068/p7052
The Identification of Unfolding Facial Expressions

Abstract: We asked whether the identification of emotional facial expressions (FEs) involves the simultaneous perception of the facial configuration or the detection of emotion-specific diagnostic cues. We recorded at high speed (500 frames s−1) the unfolding of the FE in five actors, each expressing six emotions (anger, surprise, happiness, disgust, fear, sadness). Recordings were coded every 10 frames (20 ms of real time) with the Facial Action Coding System (FACS, Ekman et al 2002, Salt Lake City, UT: Research Nexus …

Cited by 23 publications (23 citation statements)
References 64 publications
“…This leaves the door open to future research to further investigate if there is a minimum level of dynamic motion necessary or static presentation necessary to elicit this predictive mechanism. Also, although facial stimuli morphs like those used here are common in the emotion processing literature, more ecologically valid stimuli (including seeing actual faces) which entail onset latencies that vary with each facial feature [39] may more dramatically reveal the predictive mechanism shown to be at play here. Also, as discussed above, it is worth considering the potential influence of ethnicity on our results.…”
Section: Discussion
confidence: 99%
“…Fiorentini et al, 2012). The question of the way in which we recognise emotion expressions and the types of inferences we make when we view them is relevant for research on emotion perception such as emotion processing in the brain.…”
Section: Discussion
confidence: 99%
“…For example, using the Facial Action Coding System (FACS) action units (AUs) description from Ekman, Friesen, and Hager (2002), Fiorentini, Schmidt, and Viviani (2012) identified a number of characteristics present across emotion expressions for their actors, including the temporal sequence of the onset of different AUs and the total number of AUs activated. Across their actors, the lowest number of AUs activated (3) was found for the happy facial expression, whereas the highest number activated (10) was found for the fearful facial expression.…”
Section: Discussion
confidence: 99%
“…Similar to Fiorentini and colleagues (2012), the number of AUs required for each emotion in the ADFES varied, with three AUs being necessary for inclusion for happiness and six AUs for fear. In addition to variations in the number and location of activated AUs across emotions, the order of the onset of different AUs resulted in differences in confusability or errors in adults’ identification of emotions in Fiorentini and colleagues (2012). This indicates that the timing of the changes in the face is also important for disentangling complex emotions.…”
Section: Discussion
confidence: 99%