2021
DOI: 10.1037/emo0000712
Emotion recognition from posed and spontaneous dynamic expressions: Human observers versus machine analysis.

Abstract: The majority of research on the judgment of emotion from facial expressions has focused on deliberately posed displays, often sampled from single stimulus sets. Herein, we investigate emotion recognition from posed and spontaneous expressions, comparing classification performance between humans and machine in a cross-corpora investigation. For this, dynamic facial stimuli portraying the six basic emotions were sampled from a broad range of different databases, and then presented to human observers and a machin…

Cited by 52 publications (49 citation statements)
References 26 publications
“…We opted for this response format to allow for direct comparability with the automatic classifiers' recognition data using pre-specified emotion labels. As shown in prior research, adding a no/other emotion escape option does not change the overall level of target emotion recognition [51]. Instead, it only prevents agreement on incorrect labels when the target emotion label is absent [52].…”
Section: Human Observers (mentioning)
Confidence: 68%
“…Those typically involved posed or acted facial behavior displaying prototypical patterns of emotional expression. In this vein, machine classification performance was found to be high for deliberately posed stimuli (Beringer et al, 2019;Skiendziel et al, 2019), but was reduced when facial expressions were spontaneous and/or subtle in their appearance (Yitzhak et al, 2017;Krumhuber et al, 2020). Unless training sets encompass large stimulus collections, automatic systems may therefore fail to generalize to the wide variety of expressive displays common in everyday life.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Given the large and growing number of choices for academics and practitioners in consumer research, there still exists little "cross-system" (i.e., between competing products) validation research that could independently inform about the relative performance indicators of AHAA (Krumhuber et al, 2019). Out of the studies available to date, only a few have directly compared different commercial classifiers (Stöckli et al, 2018).…”
Section: Abundant Choices: Classifiers Lack Cross-system Validation (mentioning)
Confidence: 99%
“…Out of the studies available to date, only a few have directly compared different commercial classifiers (Stöckli et al, 2018). Likewise, a small number of studies has tested AHAA against human performance benchmarks on a larger number of databases (Yitzhak et al, 2017;Krumhuber et al, 2019), thereby calling the generalizability of findings derived from single stimulus sets into question.…”
Section: Abundant Choices: Classifiers Lack Cross-system Validation (mentioning)
Confidence: 99%