Automatic kinship verification using facial images is a relatively new and challenging research problem in computer vision. It consists of automatically predicting whether two persons have a biological kin relation by examining their facial attributes. While most existing works extract shallow handcrafted features from still face images, we approach this problem from a spatio-temporal point of view and explore the use of both shallow texture features and deep features for characterizing faces. Promising results, especially with deep features, are obtained on the benchmark UvA-NEMO Smile database. Our extensive experiments also show the superiority of using videos over still images, highlighting the important role of facial dynamics in kinship verification. Furthermore, fusing the two types of features (i.e., shallow spatio-temporal texture features and deep features) yields significant performance improvements over state-of-the-art methods.
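As a rough sketch of what such a fusion could look like (this is our illustration under simple assumptions, not the paper's exact pipeline), the snippet below L2-normalizes a shallow spatio-temporal texture histogram and a deep embedding, concatenates them at the feature level, and scores a face pair with cosine similarity; the descriptor names are hypothetical placeholders.

```python
import numpy as np

def fuse_features(shallow, deep):
    """Feature-level fusion: L2-normalize each descriptor, then concatenate.
    `shallow` could be a spatio-temporal texture histogram (e.g., LBP-TOP-style)
    and `deep` a CNN embedding; both are illustrative placeholders here."""
    shallow = shallow / (np.linalg.norm(shallow) + 1e-12)
    deep = deep / (np.linalg.norm(deep) + 1e-12)
    return np.concatenate([shallow, deep])

def kin_score(fused_a, fused_b):
    """Cosine similarity between the fused descriptors of two face videos;
    a higher score suggests a more likely kin relation."""
    denom = np.linalg.norm(fused_a) * np.linalg.norm(fused_b) + 1e-12
    return fused_a @ fused_b / denom
```

A thresholded version of `kin_score` would give the binary kin/non-kin decision; score-level fusion (averaging per-feature similarities) is an equally plausible alternative to the feature-level fusion sketched here.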
The Kinship Face in the Wild data sets, recently published in TPAMI, are currently used as a benchmark for the evaluation of kinship verification algorithms. We recommend that these data sets no longer be used in kinship verification research unless there is a compelling reason that takes into account the nature of the images. We note that most of the kinship image pairs are cropped from the same photographs. By exploiting this cropping information, competitive but biased performance can be obtained using a simple scoring approach that takes into account only the nature of the image pairs rather than any kin-related features. To illustrate our point, we provide classification results obtained with a simple scoring method based on the similarity of the two images of a kinship pair. Using only the distance between the chrominance averages of the images in the Lab color space, without any training or any kin-specific features, we achieve performance comparable to state-of-the-art methods. We provide the source code to support the validity of our claims and ensure the repeatability of our experiments.
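The scoring idea fits in a few lines of code. The sketch below is our own illustration (not the released source code): it computes the mean of the a* and b* chrominance channels in the Lab color space for each image and uses the Euclidean distance between these means as the pair score; the file names and threshold value are hypothetical.

```python
import cv2
import numpy as np

def chrominance_mean(path):
    """Mean of the a* and b* (chrominance) channels in the Lab color space."""
    bgr = cv2.imread(path)                       # OpenCV loads images as BGR
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)   # convert to Lab
    a, b = lab[..., 1], lab[..., 2]              # drop L (luminance), keep chrominance
    return np.array([a.mean(), b.mean()])

def pair_score(path1, path2):
    """Lower distance = more similar color statistics (e.g., same source photo)."""
    return np.linalg.norm(chrominance_mean(path1) - chrominance_mean(path2))

# Hypothetical usage: pairs cropped from the same photograph tend to share
# chrominance statistics, so a small threshold already separates "kin"
# (same-photo) pairs from "non-kin" pairs in such a data set.
if pair_score("parent.jpg", "child.jpg") < 5.0:  # threshold chosen for illustration
    print("predicted kin (biased by shared photo statistics)")
```

Note that this classifier uses no facial information at all, which is precisely why its competitive accuracy indicates a bias in the data set rather than genuine kinship cues.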
Automatic face recognition in unconstrained environments is a challenging task. To assess current trends in face recognition algorithms, we organized an evaluation of face recognition in a mobile environment. This paper presents the results of eight participants using two verification metrics. Most submitted algorithms rely on one or more of three types of features: local binary patterns, Gabor wavelet responses (including Gabor phases), and color information. The best results are obtained by UNILJ-ALP, which fused several image representations and feature types, and by UC-HU, which learns optimal features with a convolutional neural network. Additionally, we assess the usability of the algorithms on mobile devices with limited resources.
Automatic kinship verification from faces aims to determine whether two persons have a biological kin relation by comparing their facial attributes. This is a challenging research problem that has recently received considerable attention from the research community. However, most of the proposed methods analyze only the luminance (i.e., gray-scale) of the face images, discarding the chrominance (i.e., color) information, which can be a useful additional cue for verifying kin relationships. This paper investigates, for the first time, the usefulness of color information in the verification of kinship relationships from facial images. For this purpose, we extract joint color-texture features that encode both the luminance and the chrominance information in color images. The kinship verification performance of joint color-texture analysis is then compared against counterpart approaches using only gray-scale information. Extensive experiments using different color spaces and texture features are conducted on two benchmark databases. Our results indicate that classifying color images consistently yields superior performance in three different color spaces.
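One common way to realize such joint color-texture analysis, shown here as a sketch only (the paper's exact descriptor may differ), is to compute a uniform LBP histogram on each channel of a chosen color space and concatenate the per-channel histograms; the HSV color space and the LBP parameters below are illustrative choices.

```python
import numpy as np
from skimage import color
from skimage.feature import local_binary_pattern

def color_lbp_descriptor(rgb_image, P=8, R=1):
    """Joint color-texture descriptor: a uniform LBP histogram is computed
    on each channel of the color image, then all histograms are concatenated,
    so both chrominance and texture contribute to the final feature."""
    hsv = color.rgb2hsv(rgb_image)   # any color space could be substituted here
    n_bins = P + 2                   # 'uniform' LBP yields P + 2 distinct codes
    hists = []
    for c in range(hsv.shape[2]):
        codes = local_binary_pattern(hsv[..., c], P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        hists.append(hist)
    return np.concatenate(hists)     # 3 * (P + 2)-dimensional descriptor
```

Replacing `rgb2hsv` with another conversion (or the identity for RGB) reproduces the comparison across color spaces; dropping to a single gray-scale channel gives the luminance-only baseline the abstract contrasts against.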