2020
DOI: 10.1007/978-3-030-64559-5_46

Facial Expression Recognition and Ordinal Intensity Estimation: A Multilabel Learning Approach


Cited by 3 publications (3 citation statements)
References 23 publications
“…This process has been discredited [257], [258] because human intuition does not assign a numeric value as a measure of emotional intensity. The only reported ordinal intensity estimation is our model [69]: we considered FER and intensity estimation as a multilabel learning task and presented a deep multilabel model that adequately predicts the emotion and its intensity concurrently, using ordinal metrics.…”
Section: Discussion
confidence: 99%
“…The result showed that the proposed method outperformed existing probabilistic graphical methods in both accuracy and computation time. Our approach to FER and intensity estimation is presented in [69]. We considered FER and intensity estimation a multilabel task, motivated by the fact that an instance of a facial expression image contains information about both the emotion displayed and its corresponding intensity.…”
Section: B. Facial Expression Recognition and Intensity Estimation
confidence: 99%
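
The multilabel framing that both statements describe can be made concrete with a short sketch. The PyTorch snippet below is a minimal illustration under stated assumptions, not the published model from [69]: the backbone, the feature size, and the label sets (six basic emotions, four assumed ordinal intensity levels) are all hypothetical. It only shows how one sigmoid head can score emotion and intensity labels from the same image features.

```python
# Minimal sketch of the multilabel FER + ordinal intensity idea.
# Names, sizes, and label sets are illustrative assumptions, not [69].
import torch
import torch.nn as nn

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
INTENSITIES = ["low", "moderate", "high", "very_high"]  # assumed ordinal levels

class MultilabelFERHead(nn.Module):
    """One linear head over CNN features; every label is a sigmoid unit."""

    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.fc = nn.Linear(feat_dim, len(EMOTIONS) + len(INTENSITIES))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Sigmoid (not softmax): each emotion/intensity label is an
        # independent binary decision, i.e. the multilabel formulation.
        return torch.sigmoid(self.fc(feats))

head = MultilabelFERHead()
feats = torch.randn(4, 512)                          # stand-in backbone features
probs = head(feats)
emotion = probs[:, :len(EMOTIONS)].argmax(dim=1)     # most probable emotion
intensity = probs[:, len(EMOTIONS):].argmax(dim=1)   # most probable intensity
```

Because both label groups are scored in one forward pass, the emotion and its ordinal intensity are predicted concurrently, which is the property the citing papers emphasize.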
“…The CNN with a sigmoid function in the output layer serves as the binary classifier. Because of our limited dataset population, we use a pre-trained network (VGG-16) to avoid model overfitting, which was a challenge in Ekundayo & Viriri (2020). To reduce intra-class variation and increase inter-class variation, an aggregation loss (a combination of island loss and BCE loss) is proposed, which is a further addition to our work in Ekundayo & Viriri (2020).…”
Section: Introduction
confidence: 99%
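
The aggregation loss named in this statement can be sketched from its two parts. The snippet below is a hedged reading, not the citing paper's implementation: it pairs BCEWithLogitsLoss with an island-style term (a center-pulling penalty plus pairwise cosine similarity between class centers, in the spirit of Cai et al.'s island loss); the weight lam, the shapes, and the class names are assumptions.

```python
# Hedged sketch of an "aggregation loss" = BCE + island-style penalty.
# Hyperparameters, shapes, and the exact island formulation are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AggregationLoss(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int, lam: float = 0.01):
        super().__init__()
        # Learnable per-class feature centers, as in center/island loss.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.lam = lam
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, logits, targets, feats, class_ids):
        # Multilabel BCE on raw logits (targets are 0/1 per label).
        bce = self.bce(logits, targets)
        # Pull each feature toward its own class center (intra-class).
        pull = F.mse_loss(feats, self.centers[class_ids])
        # Push distinct centers apart (inter-class): sum over j != k of
        # (cos(c_j, c_k) + 1), following the island loss formulation.
        c = F.normalize(self.centers, dim=1)
        cos = c @ c.t()
        off_diag = ~torch.eye(len(c), dtype=torch.bool, device=cos.device)
        push = (cos[off_diag] + 1.0).sum()
        return bce + self.lam * (pull + push)

loss_fn = AggregationLoss(num_classes=6, feat_dim=512)
logits = torch.randn(8, 6)                     # model outputs for 8 faces
targets = torch.randint(0, 2, (8, 6)).float()  # multilabel 0/1 targets
feats = torch.randn(8, 512)                    # backbone features
class_ids = torch.randint(0, 6, (8,))          # dominant emotion per face
loss = loss_fn(logits, targets, feats, class_ids)
```

The division of labor matches the description: the BCE term handles the multilabel targets, the pull term reduces intra-class variation, and the push term increases inter-class variation by separating the class centers.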