2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2016.187

Fusing Aligned and Non-aligned Face Information for Automatic Affect Recognition in the Wild: A Deep Learning Approach

Cited by 92 publications (43 citation statements)
References 32 publications
“…The performance reported on this dataset does not exceed 75.2%. That accuracy was achieved by Pramerdorfer and Kampel [22], while other studies reached 71.2% using linear support vector machines [23], 73.73% by fusing aligned and non-aligned face information for automatic affect recognition in the wild [24], 70.66% using an end-to-end deep learning framework based on the attentional convolutional network [25], 71.91% using three subnetworks with different depths [26], and 73.4% using a hybrid CNN-scale invariant feature transform aggregator [27].…”
Section: Introduction
confidence: 87%
“…
CNN Model                  Accuracy [%]   CNN Type           Preprocessing     Optimizer
Pramerdorfer et al. [22]   75.2           Network Ensemble   -                 SGD
Tang [23]                  71.2           Loss layer         -                 SGD
Kim et al. [24]            73.73          Network Ensemble   Intraface         SGD
Minaee et al. [25]         70.02          Loss layer         -                 Adam
Hua et al. [26]            71.91          Ensemble Network   Straightforward   Adam
Connie [27]                73.4           …                  …                 …

Figure 12 illustrates how the VGG model learns patterns for a given input image.…”
Section: CNN Model, Accuracy [%], CNN Type, Preprocessing, Optimizer
confidence: 99%
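Several of the table's entries are network ensembles whose per-model class posteriors are fused at test time. As a minimal, hedged sketch (assumed NumPy implementation for illustration; not code from any cited paper), simple- and weighted-average score fusion over per-model logits might look like:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(all_logits, weights=None):
    """Fuse per-model logits of shape (n_models, n_samples, n_classes).

    weights=None gives the simple-average rule (equal weights); a
    per-model weight vector gives the weighted-average rule. Both are
    illustrative assumptions, not the cited papers' exact code.
    """
    probs = softmax(np.asarray(all_logits))               # per-model posteriors
    if weights is None:
        fused = probs.mean(axis=0)                        # equal weights
    else:
        w = np.asarray(weights, dtype=float)
        fused = np.tensordot(w / w.sum(), probs, axes=1)  # weighted mean
    return fused.argmax(axis=-1)                          # predicted class ids
```

Any monotonic fusion of posteriors works here; averaging is the simplest choice and is what "Network Ensemble" entries in such comparisons typically report.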
“…
Method                                        Accuracy [%]
Hand-crafted feature guided CNN [36]          61.86
AlexNet [37]                                  64.8
DNNRL [37]                                    70.6
ResNet [38]                                   72.4
VGG [38]                                      72.7
Ensemble of deep networks [39]                73.31
Alignment mapping networks + ensemble [39]    73.73
Single CNN [40]                               71.47
Ensemble CNN [40]                             73.73
Proposed                                      73.58
…”
Section: Methods
confidence: 99%
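The "Alignment mapping networks + ensemble [39]" row refers to combining predictions from aligned and non-aligned face inputs, the idea named in this paper's title. Below is a minimal sketch of one plausible two-stream late-fusion layout; the class name, the two backbone arguments, and the equal-weight averaging rule are assumptions for illustration, not the cited architecture:

```python
import torch
import torch.nn as nn

class TwoStreamFER(nn.Module):
    """Toy two-stream model: one CNN sees the aligned face crop, the other
    the raw (non-aligned) crop; class posteriors are averaged. Backbones
    and fusion rule are illustrative assumptions only."""

    def __init__(self, backbone_aligned: nn.Module, backbone_raw: nn.Module):
        super().__init__()
        self.aligned = backbone_aligned   # stream trained on aligned faces
        self.raw = backbone_raw           # stream trained on non-aligned faces

    def forward(self, x_aligned: torch.Tensor, x_raw: torch.Tensor) -> torch.Tensor:
        logits_a = self.aligned(x_aligned)
        logits_r = self.raw(x_raw)
        # Late fusion: average the softmax posteriors of both streams.
        return (logits_a.softmax(dim=-1) + logits_r.softmax(dim=-1)) / 2
```

Late fusion keeps the two streams independent, so either backbone can be trained or swapped separately; score-level averaging is the same simple-average rule used for the ensembles above.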
“…Inspired by the center loss [168], which penalizes the distance between deep features and their corresponding class centers, two variations were proposed to assist the supervision of the softmax loss in learning more discriminative features for FER: (1) island loss [140] was formalized to further increase the pairwise distances between different class centers (see Fig. 6(b)), and (2) locality-preserving loss (LP loss) [44] was formalized to pull the locally neighboring features of the same class together so that the intra-class local clusters of each class are compact.

Two score-fusion rules tabulated alongside this passage:

Simple Average: determine the class with the highest mean score, using the posterior class probabilities yielded from each individual network with the same weight [76], [146], [173]
Weighted Average: determine the class with the highest weighted mean score, using the posterior class probabilities yielded from each individual network with different weights [57], [78], [147], [153]

Besides, based on the triplet loss [169], which requires one positive example to be closer to the anchor than one negative example by a fixed gap, two variations were proposed to replace or assist the supervision of the softmax loss: (1) exponential triplet-based loss [145] was formalized to give difficult samples more weight when updating the network, and (2) (N+M)-tuples cluster loss [77] was formalized to alleviate the difficulty of anchor selection and threshold validation in the triplet loss for identity-invariant FER (see Fig.…”
Section: Auxiliary Blocks and Layers
confidence: 99%
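To make the center-loss family concrete, here is a minimal, hedged PyTorch sketch of a center loss with an island-loss-style repulsion term between class centers. The class name, the `lambda_island` weighting, and the exact normalization are assumptions; see [168] and [140] for the actual formulations:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterIslandLoss(nn.Module):
    """Center loss pulls each feature toward its class center; the island
    term (a simplification of the island loss idea) additionally pushes
    the centers of different classes apart via pairwise cosine similarity."""

    def __init__(self, num_classes: int, feat_dim: int, lambda_island: float = 0.1):
        super().__init__()
        # Learnable class centers, updated by the same optimizer as the net.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.lambda_island = lambda_island

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Center loss: mean squared distance to the corresponding class center.
        center_loss = (features - self.centers[labels]).pow(2).sum(dim=1).mean()

        # Island term: average (cosine similarity + 1) over distinct center
        # pairs, so minimizing it drives different centers apart.
        c = F.normalize(self.centers, dim=1)
        sim = c @ c.t() + 1.0
        off_diag = sim - torch.diag(torch.diag(sim))
        island = off_diag.sum() / (c.size(0) * (c.size(0) - 1))

        return center_loss + self.lambda_island * island
```

In training, this auxiliary term would typically be added to the softmax cross-entropy with a small weight, matching the passage's description of losses that "assist the supervision of the softmax loss".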