Proceedings of the 19th ACM International Conference on Multimodal Interaction 2017
DOI: 10.1145/3136755.3143004

From individual to group-level emotion recognition: EmotiW 5.0

Abstract: Research in automatic affect recognition has come a long way. This paper describes the fifth Emotion Recognition in the Wild (EmotiW) challenge 2017. EmotiW aims at providing a common benchmarking platform for researchers working on different aspects of affective computing. This year there are two sub-challenges: a) audio-video emotion recognition and b) group-level emotion recognition. These challenges are based on the Acted Facial Expressions in the Wild and Group Affect databases, respectively. The particul…

Cited by 180 publications (130 citation statements)
References 21 publications
“…Group-level affect recognition is a subdiscipline of FER, where the goal is to assess the overall expression of all persons in an image. It has been featured in the EmotiW competition in 2016 [12] and 2017 [86]. For this purpose, spatial features are typically extracted for each person and fused in some way, e.g., by considering multiple faces as a sequence and applying an LSTM [15].…”
Section: Learning Spatial Features for FER
confidence: 99%
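
The statement above describes a common fusion recipe: extract a spatial feature vector per detected face, treat the faces as a sequence, and let an LSTM produce one group-level prediction. A minimal sketch of that idea follows; it assumes PyTorch, and the module name, feature dimension, and hidden size are illustrative assumptions, not the cited papers' implementation.

# Hedged sketch of face-sequence fusion for group-level emotion: per-face
# features (e.g., from a pretrained CNN face embedder) are consumed as a
# sequence by an LSTM, whose final hidden state classifies the whole group.
import torch
import torch.nn as nn

class FaceSequenceFusion(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=256, num_classes=3):
        super().__init__()
        # LSTM consumes the (variable-length) sequence of face features.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, face_feats):
        # face_feats: (batch, num_faces, feat_dim), one vector per face.
        _, (h_n, _) = self.lstm(face_feats)
        # Use the final hidden state as the group-level representation.
        return self.classifier(h_n[-1])

# Example: one group photo with 6 detected faces, 512-d features each.
feats = torch.randn(1, 6, 512)
logits = FaceSequenceFusion()(feats)  # (1, 3): positive/neutral/negative

Treating faces as a sequence lets one model handle groups of any size, at the cost of imposing an (essentially arbitrary) ordering on the faces.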
“…The Inception V3 network is similar to the original work [30], which was proposed for classification on the ImageNet task. In the word cloud of the survey results, people mentioned group-level emotion-related keywords such as ‘violence’, ‘happy’, ‘angry’, ‘upset’, etc. Thus, we perform experiments with joint training for GCS and group emotion (three classes: positive, neutral, and negative [13]). The motivation is to explore the usefulness of the GCS of a group as an attribute for group emotion prediction.…”
Section: Image-Level Analysis
confidence: 99%
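
The joint training quoted above (a group cohesion score, GCS, predicted alongside a three-class group emotion) can be sketched as a shared backbone with two heads. The sketch below assumes PyTorch and substitutes a ResNet-18 stand-in for the Inception V3 backbone mentioned in the quote; all names, head sizes, and the unweighted loss sum are illustrative assumptions.

# Hedged sketch of joint GCS regression + group-emotion classification
# with a shared image backbone and two task-specific heads.
import torch
import torch.nn as nn
import torchvision.models as models

class JointCohesionEmotion(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # stand-in for Inception V3
        backbone.fc = nn.Identity()               # expose 512-d features
        self.backbone = backbone
        self.cohesion_head = nn.Linear(512, 1)    # GCS regression
        self.emotion_head = nn.Linear(512, 3)     # positive/neutral/negative

    def forward(self, images):
        feats = self.backbone(images)
        return self.cohesion_head(feats).squeeze(-1), self.emotion_head(feats)

model = JointCohesionEmotion()
images = torch.randn(4, 3, 224, 224)
gcs_pred, emo_logits = model(images)
# Joint objective: sum of a regression and a classification term
# (dummy targets here; real labels come from the annotated database).
loss = nn.functional.mse_loss(gcs_pred, torch.rand(4)) \
     + nn.functional.cross_entropy(emo_logits, torch.randint(0, 3, (4,)))

Sharing the backbone is what lets cohesion act as an auxiliary signal for emotion prediction, which is exactly the usefulness the quoted experiment sets out to probe.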
“…1) To the best of our knowledge, this is the first study proposing AGC prediction in images; 2) We compare two cohesion models, representing scene-level (holistic) and face-level information, respectively, and show that the former contributes more to the perception of cohesion; 3) We label and extend the Group Affect Database [13] with group cohesion labels and propose the GAF Cohesion database (sample images from the database are shown in Fig. 1); 4) From our experimental results, we observed that the perceived group emotion is related to group cohesiveness (Section VI).…”
Section: Introduction
confidence: 99%
“…Expression datasets: Several facial expression datasets have been created in the past that consist of face images labeled with discrete emotion categories [4,9,10,11,16,17,31,34,40,41,43,54,55], facial action units [4,34,36,37,43], and strengths of valence and arousal [25,27,28,40,44]. While these datasets played a significant role in the advancement of automatic facial expression analysis in terms of emotion recognition, action unit detection and valence-arousal estimation, they are not the best fit for learning a compact expression embedding space that mimics human visual preferences.…”
Section: Related Work
confidence: 99%