Recently, face datasets containing celebrity photos with facial makeup have been growing at exponential rates, making recognition very challenging. Existing face recognition methods rely on feature extraction and reference reranking to improve performance. However, face images with facial makeup carry inherent ambiguity due to artificial colors, shading, contouring, and varying skin tones, making the recognition task more difficult. The problem is further confounded because makeup alters the bilateral size and symmetry of certain face components, such as the eyes and lips, affecting the distinctiveness of faces. The ambiguity becomes even worse when celebrities wear different facial makeup on different days, owing to the context of interpersonal situations and current societal makeup trends. To cope with these artificial effects, we propose a deep convolutional neural network (dCNN) trained on an augmented face dataset to extract discriminative features from face images containing synthetic makeup variations. The augmented dataset, containing original face images and those with synthetic makeup variations, allows the dCNN to learn face features under a variety of facial makeup conditions. We also evaluate the role of partial and full makeup in face images in improving recognition performance. Experimental results on two challenging face datasets show that the proposed approach can compete with the state of the art.
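A minimal sketch of the augmentation-then-fine-tune idea, assuming a PyTorch/torchvision setup; `apply_synthetic_makeup` is a hypothetical placeholder for the makeup-synthesis routine, which the abstract does not specify:

```python
# Sketch: augment face images with synthetic makeup, then fine-tune a
# standard dCNN backbone so its features become robust to makeup variation.
import torch.nn as nn
from torchvision import models, transforms

def apply_synthetic_makeup(img):
    """Hypothetical placeholder: overlay synthetic lipstick/eye-shadow
    colors on a PIL image. Stands in for the paper's actual synthesis step."""
    return img  # identity here; swap in a real makeup-synthesis routine

train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    # Apply the (placeholder) makeup transform to half of the samples,
    # so the network sees both original and makeup-varied faces.
    transforms.RandomApply([transforms.Lambda(apply_synthetic_makeup)], p=0.5),
    transforms.ToTensor(),
])

# Generic ImageNet-pretrained backbone with an identity-embedding head;
# the abstract does not name a specific architecture, so ResNet-50 is an assumption.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 512)
```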
Facial palsy, caused by nerve damage, results in loss of facial symmetry and expression. A reliable palsy grading system for large-scale applications is still missing from the literature. Although numerous approaches to facial palsy quantification and grading have been reported, most employ hand-crafted features on relatively small datasets, which limits classification accuracy due to non-optimal face representation. In contrast, convolutional neural networks (CNNs) automatically learn discriminative features, facilitating accurate classification of the underlying task. In this paper, we propose to apply a typical deep network on a large dataset to extract palsy-specific features from face images. To mitigate overfitting, a limitation frequently encountered in CNNs, a generative adversarial network (GAN) is applied to augment the training dataset. The deeply learned features are then used to classify the palsy into five benchmark grades. The experimental results show that the proposed approach offers superior palsy grading performance compared to some existing methods. Such an approach is useful for palsy grading at large scale, for example in primary health care.
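A hedged sketch of the grading pipeline under these assumptions: a generic pretrained CNN with a five-class head, and GAN-synthesized faces mixed into each training batch. `gan_generator` is a hypothetical conditional generator; the paper's actual network and GAN may differ:

```python
# Sketch: five-grade palsy classifier trained on real + GAN-augmented batches.
import torch
import torch.nn as nn
from torchvision import models

NUM_GRADES = 5  # the five benchmark palsy grades

# Assumed backbone: VGG-16 with its final layer swapped for a 5-way head.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, NUM_GRADES)
criterion = nn.CrossEntropyLoss()

def mixed_batch(real_imgs, real_labels, gan_generator, device="cpu"):
    """Mix real and GAN-synthesized samples. Labels for synthetic images
    come from the GAN's conditioning grade (an assumption; the abstract
    only says a GAN augments the training set)."""
    z = torch.randn(real_imgs.size(0), 100, device=device)
    fake_labels = torch.randint(0, NUM_GRADES, (real_imgs.size(0),), device=device)
    fake_imgs = gan_generator(z, fake_labels)  # hypothetical conditional GAN
    imgs = torch.cat([real_imgs, fake_imgs])
    labels = torch.cat([real_labels, fake_labels])
    return imgs, labels
```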
Bilateral facial asymmetry is frequently exhibited by humans, but its combined evaluation across demographic traits, including gender and ethnicity, is still an open research problem. In this study, we measure and evaluate facial asymmetry across gender and different ethnic groups and investigate the differences in asymmetric facial dimensions among subjects from two public face datasets, MORPH and FERET. To this end, we detect 28 facial asymmetric dimensions in each face image using an anthropometric technique. An exploratory analysis is then performed via a multiple linear regression model to determine the impact of gender and ethnicity on facial asymmetry. A post-hoc Tukey test is used to validate the results of the proposed method. The results show that, of the 28 asymmetric dimensions, females differ from males in 25. African, Asian, Hispanic, and other ethnic groups have asymmetric dimensions that differ significantly from those of Europeans. These findings could be important for applications such as the design of facial fits, as well as guidelines for facial cosmetic surgeons. Lastly, we train a neural network classifier that employs the asymmetric dimensions for gender and race classification. The experimental results show that our trained classifier outperforms support vector machine (SVM) and k-nearest neighbors (kNN) classifiers.
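The exploratory analysis maps directly onto standard statistics tooling. Below is an illustrative sketch using statsmodels, assuming a DataFrame with one column per asymmetry dimension (`D1` stands in for any of the 28) plus `gender` and `ethnicity` columns; the file name and column names are hypothetical:

```python
# Sketch: multiple linear regression of an asymmetry dimension on
# demographic predictors, followed by a post-hoc Tukey HSD test.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("asymmetry_dimensions.csv")  # hypothetical file

# Does dimension D1 vary with gender and ethnicity? Categorical predictors
# enter via C(...) so each group gets its own coefficient.
ols_fit = smf.ols("D1 ~ C(gender) + C(ethnicity)", data=df).fit()
print(ols_fit.summary())

# Post-hoc Tukey HSD: which pairs of ethnic groups differ significantly?
tukey = pairwise_tukeyhsd(endog=df["D1"], groups=df["ethnicity"], alpha=0.05)
print(tukey)
```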
Demographic estimation from human face images involves estimating the age group, gender, and race, and finds many applications, such as access control, forensics, and surveillance. Demographic estimation can help in designing algorithms that lead to a better understanding of the facial aging process and of face recognition. Such a study has two parts: demographic estimation and subsequent face recognition and retrieval. In this paper, we first extract facial-asymmetry-based demographically informative features to estimate the age group, gender, and race of a given face image. The demographic features are then used to recognize and retrieve face images. A comparison of the demographic estimates from a state-of-the-art algorithm and the proposed approach is also presented. Experimental results on two longitudinal face datasets, MORPH II and FERET, show that the proposed approach can compete with existing methods in recognizing face images across aging variations.
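The retrieval step can be pictured as nearest-neighbor search over the demographic feature vectors. The toy sketch below uses cosine similarity and abstracts away the asymmetry-based feature extraction itself, which the abstract does not detail:

```python
# Sketch: rank gallery faces by cosine similarity to a probe's feature vector.
import numpy as np

def retrieve(probe_feat, gallery_feats, top_k=5):
    """Return indices of the top_k gallery faces most similar to the probe.
    probe_feat: (d,) vector; gallery_feats: (n, d) matrix of feature vectors."""
    p = probe_feat / np.linalg.norm(probe_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ p                      # cosine similarities, shape (n,)
    return np.argsort(-sims)[:top_k]  # indices sorted by descending similarity
```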
Histopathological image analysis is the examination of tissue under a light microscope for cancer diagnosis. Computer-assisted diagnosis (CAD) systems perform well at diagnosing cancer from histopathology images. However, stain variability in histopathology images is inevitable due to differences in staining processes, operator ability, and scanner specifications. These stain variations affect the accuracy of CAD systems. Various stain normalization techniques have been developed to cope with inter-variability issues by standardizing the appearance of images. However, these methods rely on a single reference image rather than incorporating the color distributions of the entire dataset. In this paper, we design a novel machine learning-based model that takes advantage of whole-dataset distributions as well as the color statistics of a single target image, instead of relying only on a single target image. The proposed deep model, called the stain acclimation generative adversarial network (SA-GAN), consists of one generator and two discriminators. The generator maps input images from the source domain to the target domain. The first discriminator forces the generated images to maintain the color patterns of the target domain, while the second discriminator forces the generated images to preserve the structural content of the source domain. The proposed model is trained using a color attribute metric extracted from a selected template image. Therefore, the designed model learns not only dataset-specific staining properties but also image-specific textural content. Evaluation results on four different histopathology datasets show the efficacy of SA-GAN in acclimating stain content and enhancing normalization quality, obtaining the highest values of the performance metrics. Additionally, the proposed method is evaluated on a multiclass cancer type classification task, showing a 6.9% accuracy improvement on the ICIAR 2018 hidden test data.
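A high-level sketch of the one-generator, two-discriminator setup: the generator must simultaneously satisfy a color critic (target-domain staining) and a structure critic (source-domain content). The modules below are minimal stand-ins, not the paper's architectures, and the color attribute metric is omitted:

```python
# Sketch: SA-GAN-style generator objective with two adversarial critics.
import torch
import torch.nn as nn

# Tiny stand-in networks; the real SA-GAN modules are deeper.
G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())
# Critic 1 judges color/staining patterns of the generated image alone.
D_color = nn.Sequential(nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                        nn.Conv2d(16, 1, 4, stride=2, padding=1))
# Critic 2 judges structure on (generated, source) pairs, hence 6 input channels.
D_struct = nn.Sequential(nn.Conv2d(6, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                         nn.Conv2d(16, 1, 4, stride=2, padding=1))

adv = nn.BCEWithLogitsLoss()

def generator_loss(src):
    """Generator tries to fool both critics: target-like color patterns
    while preserving the source image's structural content."""
    fake = G(src)
    d_c = D_color(fake)
    d_s = D_struct(torch.cat([fake, src], dim=1))
    return adv(d_c, torch.ones_like(d_c)) + adv(d_s, torch.ones_like(d_s))

loss = generator_loss(torch.randn(1, 3, 64, 64))  # smoke test on a dummy patch
```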