2021
DOI: 10.1007/s12652-021-03003-4
RETRACTED ARTICLE: Residual attention network for deep face recognition using micro-expression image analysis

Cited by 8 publications (8 citation statements)
References 23 publications
“…Multiple MER works [84], [99], [156], [157], [158] employed residual blocks for robust recognition on small-scale ME datasets. Instead of directly applying the shortcut connection, [159] further designed a convolutionable shortcut to learn the important residual information, and AffectiveNet [160] introduced an MFL module learning the low- and high-level features in parallel to increase the discriminative capability between the inter- and intra-class variations.…”
Section: Network Block
confidence: 99%
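As a concrete reference for the residual blocks these MER works rely on, here is a minimal PyTorch sketch of a residual block with an identity shortcut connection. The layer widths and kernel sizes are illustrative assumptions, not the cited architectures.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Identity shortcut: the input bypasses the conv stack and is
        # added back, so the block only has to learn the residual.
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)

The shortcut eases gradient flow through deep stacks, which is why these blocks are favored on small-scale ME datasets where very deep plain networks overfit or fail to converge.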
“…Therefore, the attention mechanism has been widely used in isomorphic facial expression recognition. Chinnappa et al. [52] proposed focusing attention on the micro-expression analysis, fusing the feature information of the channel and spatial dimensions to obtain the inter-spatial kindred matrix. Wang et al. [53] proposed a new anchor-level attention, highlighting the features from the face region.…”
Section: Related Work
confidence: 99%
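For readers unfamiliar with this mechanism, below is a hedged PyTorch sketch of fusing channel- and spatial-dimension attention in the general spirit described above. The class name ChannelSpatialAttention and the CBAM-style pooling choices are assumptions; the inter-spatial kindred matrix of Chinnappa et al. [52] is not reproduced here.

import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Channel attention: squeeze the spatial dims, re-weight channels.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: one H x W map from pooled channel statistics.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_mlp(x)  # channel re-weighting
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial_conv(pooled)  # spatial re-weighting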
“…al. [179] introduced multi-scale shortcut connections into pre-trained deep networks to solve the gradient-disappearance problem. Instead of directly applying the shortcut connection, [180] designed a convolutionable shortcut to learn the important residual information. Furthermore, inspired by the residual concept, AffectiveNet introduced an MFL module learning the low-level and high-level features in parallel to increase the discriminative capability between the inter- and intra-class variations.…”
Section: A Network Block
confidence: 99%
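To make the parallel low-/high-level feature idea concrete, here is a hypothetical PyTorch sketch of two branches with different receptive fields whose outputs are concatenated. It is loosely modeled on the MFL-module description quoted above and is not AffectiveNet's actual design; the branch kernels and channel split are assumptions.

import torch
import torch.nn as nn

class ParallelFeatureLearning(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        # Low-level branch: small receptive field keeps fine texture cues.
        self.low = nn.Conv2d(in_channels, out_channels // 2,
                             kernel_size=3, padding=1)
        # High-level branch: larger receptive field captures coarser structure.
        self.high = nn.Conv2d(in_channels, out_channels // 2,
                              kernel_size=5, padding=2)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Both branches see the same input; concatenating them widens the
        # range of features available for inter-/intra-class discrimination.
        return self.relu(torch.cat([self.low(x), self.high(x)], dim=1))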