2019
DOI: 10.1371/journal.pone.0211735

Using computer-vision and machine learning to automate facial coding of positive and negative affect intensity

Abstract: Facial expressions are fundamental to interpersonal communication, including social interaction, and allow people of different ages, cultures, and languages to quickly and reliably convey emotional information. Historically, facial expression research has followed from discrete emotion theories, which posit a limited number of distinct affective states that are represented with specific patterns of facial action. Much less work has focused on dimensional features of emotion, particularly positive and negative …

Cited by 31 publications (30 citation statements). References 61 publications.
“…To our knowledge, this is the first study of its kind to include dynamic facial expressions as direct input into a cognitive model, although similar model-based approaches are becoming increasingly common in cognitive neuroscience (Turner, Forstmann, Love, Palmeri, & Van Maanen, 2017). Further, work using automated facial expression coding is gaining traction in social and behavioral sciences due to its efficiency relative to human coders (e.g., Cheong, Brooks, & Chang, 2017; Haines et al., 2019). Future work would benefit from combining automated facial expression coding with behavioral paradigms that collect self-reports of emotion (e.g., Rutledge et al., 2014), which would both allow for more strenuous validity tests of automated measures and create opportunities for exploring the relationships between unobservable and observable emotional states.…”
Section: Discussion
confidence: 99%
“…The AFEC model first uses FACET, a computer-vision software package (iMotions, 2018), to detect the presence of 20 different facial action units (Ekman, Friesen, & Hager, 2002), which are then translated to affect intensity ratings using a machine learning model that we previously validated. In our validation study, the model showed correlations with human observer ratings of .89 and .76 for positive and negative affect intensity, respectively (for more details, see Haines et al., 2019). Figure 5 shows the steps used to preprocess and apply the AFEC model to our participants' facial expressions in response to outcome feedback.…”
Section: Automated Facial Expression Coding
confidence: 94%
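The pipeline quoted above has two stages: a computer-vision detector (FACET) scores facial action units frame by frame, and a previously validated machine learning model maps those AU scores onto continuous positive and negative affect intensity ratings. The sketch below illustrates only the second stage, on synthetic data; the actual model family, features, and training targets used by Haines et al. (2019) are not specified in this record, so a ridge regression fit to simulated human ratings stands in as a placeholder.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Synthetic stand-in for FACET output: per-frame evidence scores for
# 20 facial action units (rows = frames, columns = AUs).
n_frames, n_aus = 500, 20
au_evidence = rng.normal(size=(n_frames, n_aus))

# Synthetic stand-in for human observer ratings of positive affect
# intensity, the kind of target the validation study trained against.
true_weights = rng.normal(size=n_aus)
positive_intensity = au_evidence @ true_weights + rng.normal(scale=0.5, size=n_frames)

# Fit a regularized linear model mapping AU evidence to affect
# intensity, then check out-of-sample agreement with the "human"
# ratings, analogous to the reported .89 / .76 correlations.
model = Ridge(alpha=1.0)
predicted = cross_val_predict(model, au_evidence, positive_intensity, cv=5)
r = np.corrcoef(predicted, positive_intensity)[0, 1]
print(f"Cross-validated correlation with simulated human ratings: {r:.2f}")

Cross-validating the predictions before correlating them with the ratings mirrors how agreement with human coders is typically reported, and avoids inflating the estimate with in-sample fit.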
“…In one preliminary study, only a small set of pictures (three pleasant and unpleasant emotional scenes) was used to elicit facial responses, with moderate to good classification performance at the categorical level of analysis (Stöckli et al., 2018). The other study demonstrated good prediction of unpleasant versus pleasant facial responses with an AU-based machine learning procedure (Haines et al., 2019). Unfortunately, neither study included a neutral picture category as a comparison condition.…”
Section: Automatic Facial Coding
confidence: 99%
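The comparison quoted above concerns binary valence classification from facial action units: given AU scores for a facial response, predict whether the eliciting picture was pleasant or unpleasant. The sketch below shows that setup in schematic form; the classifier choice (logistic regression), the synthetic AU data, and the valence-specific shift applied to the first three units are all illustrative assumptions, not details from Stöckli et al. (2018) or Haines et al. (2019).

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic AU features for responses to pleasant (label 1) and
# unpleasant (label 0) pictures; a real study would use AU scores
# produced by a facial-expression detector.
n_trials, n_aus = 200, 20
labels = rng.integers(0, 2, size=n_trials)
au_features = rng.normal(size=(n_trials, n_aus))

# Shift a few AUs by condition so the classes are separable,
# mimicking valence-specific facial activity (purely illustrative).
au_features[:, :3] += labels[:, None] * 1.0

# Cross-validated accuracy for pleasant-vs-unpleasant classification.
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, au_features, labels, cv=5).mean()
print(f"Mean cross-validated accuracy: {acc:.2f}")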