2020
DOI: 10.1109/tip.2019.2956143
Region Attention Networks for Pose and Occlusion Robust Facial Expression Recognition

Cited by 653 publications (391 citation statements)
References 55 publications
“…Our approach produces significantly better results than the recent studies on both metrics. For WA, we get 88.98%, which is improved by more than 2% in absolute terms or 2.4% relatively, compared to Wang et al [39]. In the UA metric, our approach is 4.05% better in absolute terms compared to [46] or 5.28% relatively.…”
Section: B. Experimental Results
Mentioning confidence: 56%
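The excerpt above reports both absolute and relative improvements. As a quick sanity check on that arithmetic, the sketch below back-computes the implied baseline WA (~86.9% for Wang et al. [39], an assumed value inferred from the quoted deltas, not one stated in the excerpt):

```python
# Sanity check of absolute vs. relative improvement as quoted above.
# The baseline of 86.90 is an assumption back-computed from the deltas.
def improvements(new_score, old_score):
    """Return (absolute, relative-percent) improvement over a baseline."""
    absolute = new_score - old_score
    relative = absolute / old_score * 100  # percent, relative to baseline
    return absolute, relative

# WA: 88.98% vs. an assumed baseline of ~86.90%
wa_abs, wa_rel = improvements(88.98, 86.90)
print(f"WA: +{wa_abs:.2f} absolute, +{wa_rel:.1f}% relative")
```

With that assumed baseline, the output matches the quoted figures: a bit over 2 points absolute and about 2.4% relative.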
“…Table 4 gives the results for the RAF-DB dataset. In previous studies, the methods in [38], [39], [45] report results in WA metric, and others [46], [47] report UA metric. We report and compare with previous findings in both WA and UA metrics.…”
Section: B. Experimental Results
Mentioning confidence: 99%
“…The accuracy of facial expression recognition is affected by many factors; among them, occlusion (caused by obstructions, for example, wearing spectacles) is one of the most prominent, and it is a hot research topic. Many methods have been proposed to deal with this problem, and much progress has been achieved [59][60][61].…”
Section: Discussion
Mentioning confidence: 99%
“…Many previous studies on automatic engagement recognition have focused on perceived engagement (i.e., engagement as judged by an external observer) [9]. Various automatic engagement prediction systems have been proposed using multi-modal information such as student responses [11], facial cues [13,17,19,20] or body movements in learning videos [5,23], behavior in test quizzes [10], and even advanced physiological and neurological measures [8]. Among these, video data offers a good trade-off between ease of capture and granularity.…”
Section: Related Work
Mentioning confidence: 99%