2022
DOI: 10.1007/s00521-022-08005-7
A deep-learning-based facial expression recognition method using textural features

Cited by 21 publications (3 citation statements) · References 41 publications
“…The CNN models were trained on images from the JAFFE, Extended Cohn-Kanade (CK+), and FER2013 datasets converted into LTP, LBP, and CLBP texture representations. In [12], an effective FER system was developed using a new deep-learning neural-network regression-activation (DR) classification algorithm. First, a Gamma-HE model was used for pre-processing, and facial features were then extracted with a Pyramid HOG (PHOG)-based Supervised Descent Method (SDM).…”
Section: Literature Review
confidence: 99%
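The LBP textural features referenced in this statement can be sketched as follows. This is a minimal illustrative implementation of the basic 3×3 local binary pattern, not the cited authors' pipeline; the function names `lbp_image` and `lbp_histogram` are ours.

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP: encode each pixel by thresholding its 8
    neighbours against the centre value, one bit per neighbour."""
    # Offsets of the 8 neighbours, in a fixed clockwise order.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = gray.shape
    centre = gray[1:h-1, 1:w-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = gray[1+dy:h-1+dy, 1+dx:w-1+dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes

def lbp_histogram(gray):
    """256-bin normalised histogram of LBP codes:
    the textural feature vector fed to the classifier."""
    codes = lbp_image(gray)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

In the approaches surveyed above, a histogram like this (or the LBP-coded image itself) replaces or augments the raw pixels as CNN input.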
“…The MiniExpNet network is used, and its performance is enhanced by incorporating an attention block and a self‐distillation mechanism. The study in [15] develops a convolutional neural network (CNN) model to recognize FEs by taking advantage of textural cues, which are closely linked with changes in FEs. The combination of completed local binary pattern (CLBP) and CNN has shown better recognition rates when applied on benchmark datasets.…”
Section: Related Work
confidence: 99%
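The completed LBP (CLBP) combination credited here with better recognition rates extends plain LBP with two complementary codes. The sketch below is an illustrative implementation under common CLBP conventions, not the cited paper's code; the function name `clbp_features` is ours.

```python
import numpy as np

def clbp_features(gray):
    """Completed LBP: three complementary per-pixel codes.
    CLBP_S (sign) is the usual LBP; CLBP_M thresholds the magnitudes of
    the neighbour-centre differences against their global mean; CLBP_C
    thresholds the centre pixel against the global image mean."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = gray.shape
    centre = gray[1:h-1, 1:w-1].astype(int)
    diffs = np.stack([gray[1+dy:h-1+dy, 1+dx:w-1+dx].astype(int) - centre
                      for dy, dx in offsets])        # shape (8, h-2, w-2)
    s_code = np.zeros_like(centre, dtype=np.uint8)
    m_code = np.zeros_like(centre, dtype=np.uint8)
    mean_mag = np.abs(diffs).mean()
    for bit in range(8):
        s_code |= (diffs[bit] >= 0).astype(np.uint8) << bit
        m_code |= (np.abs(diffs[bit]) >= mean_mag).astype(np.uint8) << bit
    c_code = (centre >= gray.mean()).astype(np.uint8)  # binary centre code
    # Concatenate the normalised histograms of the three codes.
    hists = [np.bincount(c.ravel(), minlength=n).astype(float)
             for c, n in ((s_code, 256), (m_code, 256), (c_code, 2))]
    return np.concatenate([hh / hh.sum() for hh in hists])
```

The resulting 514-dimensional descriptor (256 + 256 + 2 bins) is what such hybrid pipelines pass to, or fuse with, the CNN.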
“…Accuracy (%):
LBP-CNN [64]: 79.5
Broad learning [65]: 81
STF + LSTM [66]: 82
ResNet150 [67]: 89
LTP-CNN [64]: 89.2
CLBP-CNN [64]: 91
LBP + ORB features [68]: 93.2
Deep Features + HOG [69]: 94.17
Multiple attention network [70]: 94.51
Hybrid CNN model [71]: 94.82
EmoNeXt-Tiny: 95.1
Pre-trained VGG19 [72]: 96
WMCNN-LSTM [73]: 97.5
Pre-trained ResNet50 [72]: 97.7
EmoNeXt-Small: 97.78
Dynamic cascaded classifier [74]: 97.84
DeepEmotion [59]: 98
EmoNeXt-Base: 98.44
MBCC-CNN [75]: 98.48
Pre-trained MobileNet [72]: 98.5
Triple-Structure Network [26]: 99.04
Inception-V3-based approach [34]: 99.57
ViT + SE [76]: 99.8
EfficientNet-XGBoost [37]: 100
EmoNeXt-Large: 100
EmoNeXt-XLarge: 100…”
Section: Model
confidence: 99%