From Facial Expression Recognition to Interpersonal Relation Prediction
2017, DOI: 10.1007/s11263-017-1055-1


Cited by 240 publications, with 119 citation statements (1 supporting, 118 mentioning, 0 contrasting). References 82 publications.
“…Training Settings. We split the datasets (CEW [46], CelebA [45], EmotioNet [43], ExpW [44]) into 75% for training, 10% for validation, and 15% for testing. Many of the detected expressions have a high ratio of negative to positive examples (i.e.…”
Section: Classification of Facial Expressions
confidence: 99%
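The fixed 75/10/15 split described in this statement is straightforward to reproduce. Below is a minimal sketch that applies scikit-learn's train_test_split twice; the function name, variable names, and the stratification option are assumptions, since the citing paper's splitting code is not shown here.

```python
# Hypothetical sketch of the 75/10/15 train/validation/test split quoted above.
# `samples` and `labels` are placeholders; loading CEW/CelebA/EmotioNet/ExpW is
# not part of the citation statement, so none of this is the authors' code.
from sklearn.model_selection import train_test_split

def split_75_10_15(samples, labels, seed=0):
    # Stage 1: carve off the 75% training portion.
    X_train, X_rest, y_train, y_rest = train_test_split(
        samples, labels, train_size=0.75, random_state=seed, stratify=labels)
    # Stage 2: split the remaining 25% into validation (10%) and test (15%);
    # 15 / 25 = 0.6 of the remainder goes to the test set.
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=0.6, random_state=seed, stratify=y_rest)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)
```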
“…In order to avoid biasing the classifier toward the most frequent class (the negative class), the positive and negative examples in the training set are balanced by undersampling [47]. The ExpW [44] dataset is annotated for 6 emotional expressions and the neutral expression. To keep the training set balanced and diverse when training the detector for the neutral expression, negative examples equal in number to the positive examples are drawn from all 6 emotional expressions.…”
Section: Classification of Facial Expressions
confidence: 99%
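As a rough illustration of the balancing strategy this statement describes, the sketch below undersamples the majority class and draws the neutral detector's negatives evenly from the six emotion classes. All function and variable names are hypothetical, and the exact procedure in the cited reference [47] may differ.

```python
# Illustrative undersampling sketch; not the procedure from [47] verbatim.
import numpy as np

def balance_by_undersampling(labels, seed=0):
    """Return indices of a class-balanced subset of a binary-labeled set."""
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(labels == 1)
    neg = np.flatnonzero(labels == 0)
    minority, majority = (pos, neg) if len(pos) <= len(neg) else (neg, pos)
    # Keep only as many majority examples as there are minority examples.
    kept = rng.choice(majority, size=len(minority), replace=False)
    idx = np.concatenate([minority, kept])
    rng.shuffle(idx)
    return idx

def neutral_detector_negatives(emotion_labels, n_positives, seed=0):
    """Draw negatives for the neutral detector evenly from the 6 emotions."""
    rng = np.random.default_rng(seed)
    per_class = n_positives // 6  # equal share from each emotional expression
    picks = [
        rng.choice(np.flatnonzero(emotion_labels == c),
                   size=per_class, replace=False)
        for c in range(6)
    ]
    return np.concatenate(picks)
```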
“…Expression datasets: Several facial expression datasets have been created in the past that consist of face images labeled with discrete emotion categories [4,9,10,11,16,17,31,34,40,41,43,54,55], facial action units [4,34,36,37,43], and strengths of valence and arousal [25,27,28,40,44]. While these datasets played a significant role in the advancement of automatic facial expression analysis in terms of emotion recognition, action unit detection and valence-arousal estimation, they are not the best fit for learning a compact expression embedding space that mimics human visual preferences.…”
Section: Related Work
confidence: 99%
“…Interdisciplinary research combining multimedia and sociology has been pursued for many years [2,12]. Popular topics include social network discovery [13], key actor detection [14], group activity recognition [15], and so on.…”
Section: Related Work
confidence: 99%