2017 30th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI)
DOI: 10.1109/sibgrapi.2017.60
Cross-Database Facial Expression Recognition Based on Fine-Tuned Deep Convolutional Network

Cited by 76 publications (52 citation statements)
References 22 publications
“…For the CK+ dataset, our ECAN achieves much better performance than most other methods, except [47], which conducted cross-dataset facial expression recognition using six lab-controlled datasets combined as the source domain. It is worth noting that CK+ and these lab-controlled datasets are very similar in many respects, such as the controlled collection environment, subject characteristics, illumination conditions and head poses.…”
Section: Results (mentioning)
confidence: 94%
See 1 more Smart Citation
“…For CK+ dataset, Our ECAN achieves much better performance than most other methods except [47] that conducted cross-dataset facial expression recognition combining six labcontrolled datasets as the source domain. It is worth noting that CK+ and these lab-controlled datasets are very similar in many respects, such as the controlled collection environment, subject characters, illumination condition and head postures.…”
Section: Resultsmentioning
confidence: 94%
“…So we further evaluate our method using RAF-DB 2.0 together with three other lab-controlled datasets as our multi-source domain, and achieve the best cross-dataset performance of 89.69%. On the other hand, when compared to the CK+ in-dataset results in Table III … For the JAFFE dataset, which is highly biased with respect to gender and ethnicity (it contains only ten Japanese females), the fine-tuning technique used in [47], which reported high accuracy on the CK+ dataset, is no longer effective in this context. In contrast, by matching the marginal and conditional distributions and also the class distribution across domains, our method yields the best performance and is … For the MMI dataset, our ECAN achieves 69.89% cross-dataset accuracy, which outperforms all the other compared methods as well as the in-dataset performance shown in Table III-B. Comparing with the baselines, we find that the original CNN structure is inferior to some previous methods that use a source dataset (such as CK+) more similar to the target MMI dataset, and that the original MMD (CNN+MMD) gains only a negligible improvement.…”
Section: Results (mentioning)
confidence: 97%
“…have been shown to be very effective. The most recent approaches, however, are based on deep convolutional neural networks (CNNs) [9], [11], [13], [17], [20], [21], [22], [23]. One of their main drawbacks is the amount of data required to train such deep networks.…”
Section: Introduction (mentioning)
confidence: 99%
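The data-scarcity drawback noted in this excerpt is what the cited paper's fine-tuning approach addresses: reuse a network pretrained on a large dataset, freeze its feature extractor, and retrain only a new classification head for the seven basic expressions. A minimal PyTorch sketch of that pattern; the tiny network below is a hypothetical stand-in for a pretrained model, not the architecture used in any of the cited works:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a CNN pretrained on a large source dataset.
class TinyCNN(nn.Module):
    def __init__(self, n_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(8, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x))

net = TinyCNN()

# Freeze the "pretrained" feature extractor ...
for p in net.features.parameters():
    p.requires_grad = False

# ... and replace the head with one sized for the 7 basic expressions.
net.classifier = nn.Linear(8, 7)

# Only the unfrozen head parameters are handed to the optimizer.
optimizer = torch.optim.SGD(
    (p for p in net.parameters() if p.requires_grad), lr=1e-3)

out = net(torch.randn(2, 3, 32, 32))  # batch of 2 dummy 32x32 RGB faces
```

In practice one would fine-tune on the target expression database for a few epochs, optionally unfreezing the last convolutional blocks once the new head has stabilized.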
“…This leads to a challenging problem, namely, the cross-database non-frontal FER problem. To cope with this challenge, many effective approaches, such as subspace-based methods and deep-learning-based models, have been proposed in recent years [16], [17], [18], [19], [20], [21], [22], [23]. To address the distribution discrepancy between training and testing facial expression images, in our preliminary work [19], Zheng et al. proposed a novel transductive transfer subspace learning method that jointly learns a discriminative subspace and predicts the label values of the unlabelled facial expression images, using all labelled training samples from the source domain and an unlabelled auxiliary test sample set from the target domain.…”
(mentioning)
confidence: 99%
“…In [20], Wei et al. proposed a deep nonlinear feature coding framework for the unsupervised cross-domain FER problem, which introduces domain divergence minimization via Maximum Mean Discrepancy (MMD) and kernelized coding, building on a marginalized stacked denoising auto-encoder to extract highly effective deep features. Zavarez et al. [21] applied fine-tuning of a deep convolutional network to the cross-database video-based FER problem on several well-established facial expression databases. However, these cross-database FER methods typically experiment only with frontal and near-frontal facial samples, or with samples from a single viewpoint per domain.…”
(mentioning)
confidence: 99%
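The MMD criterion mentioned in this excerpt (and in the CNN+MMD baseline above) measures the distance between source- and target-domain feature distributions as the difference of their kernel mean embeddings. A small NumPy sketch of the biased squared-MMD estimator with an RBF kernel; the bandwidth `gamma` is an illustrative choice, not one taken from the cited work:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise RBF kernel values exp(-gamma * ||a_i - b_j||^2).
    sq = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * sq)

def mmd2(x, y, gamma=1.0):
    # Biased estimate of the squared Maximum Mean Discrepancy:
    # mean k(x,x') + mean k(y,y') - 2 mean k(x,y).
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(0)
# Two samples from the same distribution vs. one shifted by 3 per dimension.
same = mmd2(rng.normal(0, 1, (200, 5)), rng.normal(0, 1, (200, 5)))
diff = mmd2(rng.normal(0, 1, (200, 5)), rng.normal(3, 1, (200, 5)))
```

Minimizing such a term alongside the task loss pulls the two feature distributions together, which is the domain-divergence-minimization idea the excerpt describes.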