2017 4th International Conference on Information Science and Control Engineering (ICISCE)
DOI: 10.1109/icisce.2017.95
Joint Face Detection and Facial Expression Recognition with MTCNN

Cited by 211 publications (65 citation statements) · References 6 publications
“…To extract the representations that were generated by the DCNNs, we ran the trained models in evaluation mode on a predefined set of image stimuli (see section 3.1 above). The face images were first aligned using the MTCNN face alignment algorithm (Xiang & Zhu, 2017). Following alignment, the images were normalized with the standard ImageNet normalization (M = [0.485, 0.456, 0.406], SD = [0.229, 0.224, 0.225]).…”
Section: Methods (mentioning)
confidence: 99%
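The preprocessing pipeline quoted above (MTCNN alignment followed by ImageNet normalization) can be sketched as follows. This assumes the facenet-pytorch implementation of MTCNN and a 224-pixel crop size; neither the library nor the crop size is specified in the quoted statement, and "face.jpg" is a placeholder path.

```python
# Minimal sketch, assuming the facenet-pytorch MTCNN implementation and a
# 224x224 crop size -- neither is specified in the quoted statement.
from PIL import Image
from facenet_pytorch import MTCNN
from torchvision import transforms

mtcnn = MTCNN(image_size=224, post_process=False)  # detect, align, crop one face

normalize = transforms.Normalize(
    mean=[0.485, 0.456, 0.406],  # standard ImageNet channel means
    std=[0.229, 0.224, 0.225],   # standard ImageNet channel SDs
)

img = Image.open("face.jpg").convert("RGB")  # placeholder input path
aligned = mtcnn(img)                         # float tensor in [0, 255], or None
if aligned is not None:                      # MTCNN returns None if no face found
    batch = normalize(aligned / 255.0).unsqueeze(0)
    # model.eval(); with torch.no_grad(): features = model(batch)
```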
“…Besides, some preprocessing algorithms may also offer the ROI information for generating corresponding face masks. For example, we can use a facial landmark detection model [41] to predict an accurate center point of the ROI region and generate the corresponding mask for face images.…”
Section: Methods (mentioning)
confidence: 99%
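A hedged sketch of the landmark-to-mask idea described above, using MTCNN's five facial landmarks. The centroid-based center estimate and the circular mask with a radius tied to the detected box are illustrative assumptions; the cited work [41] may construct the ROI mask differently.

```python
# Illustrative sketch: estimate an ROI center from facial landmarks and
# build a binary mask. The circular mask and radius are assumptions,
# not the method of the cited work [41].
import numpy as np
from PIL import Image
from facenet_pytorch import MTCNN

detector = MTCNN(keep_all=False)

img = Image.open("face.jpg").convert("RGB")  # placeholder input path
boxes, probs, landmarks = detector.detect(img, landmarks=True)

if landmarks is not None:
    pts = landmarks[0]            # five (x, y) points: eyes, nose, mouth corners
    cx, cy = pts.mean(axis=0)     # ROI center as the landmark centroid
    h, w = img.height, img.width
    yy, xx = np.mgrid[0:h, 0:w]
    # Radius tied to the detected face box size (illustrative choice)
    radius = 0.6 * max(boxes[0][2] - boxes[0][0], boxes[0][3] - boxes[0][1]) / 2
    mask = ((xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2).astype(np.uint8)
```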
“…MORPH contains 55,000 face images of 13,617 identities from 16 to 77 years old. Following prior works [10, 11, 17], we first extract facial regions of 200 × 200 pixels using MTCNN [18] and then resize them to 128 × 128 resolution. We split the dataset into training and test sets at a 90:10 ratio.…”
Section: Methods (mentioning)
confidence: 99%
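The MORPH preprocessing described above (200 × 200 MTCNN crops, resized to 128 × 128, split 90:10) might look like the following sketch. The directory layout, random seed, and image-level (rather than identity-disjoint) split are assumptions not stated in the quote.

```python
# Sketch of the quoted MORPH pipeline; paths, seed, and image-level split
# are assumptions not specified in the citing paper.
import random
from pathlib import Path
from PIL import Image
from facenet_pytorch import MTCNN

mtcnn = MTCNN(image_size=200, post_process=False)  # 200x200 face crops

out_dir = Path("processed")  # placeholder output directory
out_dir.mkdir(exist_ok=True)

for p in sorted(Path("morph").glob("*.jpg")):      # placeholder dataset directory
    face = mtcnn(Image.open(p).convert("RGB"))
    if face is None:
        continue                                   # skip images with no detected face
    crop = Image.fromarray(face.permute(1, 2, 0).byte().numpy())
    crop.resize((128, 128), Image.BILINEAR).save(out_dir / p.name)

random.seed(0)                                     # assumed fixed seed
files = sorted(out_dir.glob("*.jpg"))
random.shuffle(files)
split = int(0.9 * len(files))
train, test = files[:split], files[split:]         # 90:10 train/test split
```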