Automatic medical diagnosis is an emerging center of interest in computer vision as it provides unobtrusive, objective information on a patient's condition. The face, as a mirror of health status, can reveal symptomatic indications of specific diseases. The detection of facial abnormalities or atypical features is therefore of utmost importance for medical diagnostics. This survey gives an overview of recent developments in medical diagnostics from facial images based on computer vision methods. Various approaches have been considered to assess facial symptoms and ultimately provide further support to practitioners. However, the developed tools are still seldom used in clinical practice, since their reliability remains a concern owing to the lack of clinical validation of the methodologies and their limited applicability. Nonetheless, efforts are being made to provide robust solutions suitable for healthcare environments by addressing practical issues such as real-time assessment or patient positioning. This survey provides an updated collection of the most relevant and innovative solutions in facial image analysis. The findings show that, with the help of computer vision methods, over 30 medical conditions can be preliminarily diagnosed through the automatic detection of some of their symptoms. Furthermore, future perspectives, such as the need for interdisciplinary collaboration and for collecting publicly available databases, are highlighted.
Automatic kinship verification using facial images is a relatively new and challenging research problem in computer vision. It consists of automatically predicting whether two persons have a biological kin relation by examining their facial attributes. While most existing works extract shallow handcrafted features from still face images, we approach this problem from a spatio-temporal point of view and explore the use of both shallow texture features and deep features for characterizing faces. Promising results, especially with deep features, are obtained on the benchmark UvA-NEMO Smile database. Our extensive experiments also show the superiority of videos over still images, pointing out the important role of facial dynamics in kinship verification. Furthermore, the fusion of the two types of features (i.e. shallow spatio-temporal texture features and deep features) shows significant performance improvements compared to state-of-the-art methods.
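As a concrete illustration of the fusion step mentioned above, the sketch below combines a precomputed shallow spatio-temporal texture descriptor with a precomputed deep feature vector by L2-normalizing and concatenating them before training a linear SVM. This is only a minimal sketch under stated assumptions: the feature dimensions, the random placeholder data, and the use of scikit-learn's SVC are illustrative choices, not details taken from the paper.

```python
# Minimal sketch of feature-level fusion: a shallow spatio-temporal texture
# descriptor and a deep feature vector, both assumed precomputed per face
# video, are L2-normalized, concatenated, and fed to a linear SVM.
# Feature extraction itself is outside the scope of this sketch.
import numpy as np
from sklearn.svm import SVC

def fuse(shallow_feat, deep_feat):
    """L2-normalize each modality and concatenate them into one vector."""
    s = shallow_feat / (np.linalg.norm(shallow_feat) + 1e-12)
    d = deep_feat / (np.linalg.norm(deep_feat) + 1e-12)
    return np.concatenate([s, d])

# Hypothetical training data: one fused vector per face-video pair, with
# labels 1 (kin) / 0 (not kin). Dimensions and values are placeholders.
rng = np.random.default_rng(0)
X = np.stack([fuse(rng.normal(size=177), rng.normal(size=4096)) for _ in range(40)])
y = rng.integers(0, 2, size=40)

clf = SVC(kernel="linear").fit(X, y)   # binary kinship verifier on fused features
print(clf.predict(X[:5]))
```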
The Kinship Face in the Wild data sets, recently published in TPAMI, are currently used as a benchmark for the evaluation of kinship verification algorithms. We recommend that these data sets no longer be used in kinship verification research unless there is a compelling reason that takes into account the nature of the images. We note that most of the image kinship pairs are cropped from the same photographs. Exploiting this cropping information, competitive but biased performance can be obtained with a simple scoring approach that takes into account only the nature of the image pairs rather than any features related to kinship. To support our claims, we provide classification results obtained with a simple scoring method based on the image similarity of the two images in a kinship pair. Using simply the distance between the chrominance averages of the images in the Lab color space, without any training or any kin-specific features, we achieve performance comparable to state-of-the-art methods. We provide the source code to prove the validity of our claims and ensure the repeatability of our experiments.
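For illustration, the snippet below sketches the kind of chrominance-average scoring baseline described above, assuming OpenCV and NumPy are available; the image paths and the decision threshold are hypothetical placeholders rather than values from the paper.

```python
# Minimal sketch of a chrominance-average baseline: score a pair of face
# crops by the distance between their mean (a, b) values in the Lab color
# space; small distances suggest the crops come from the same photograph.
import cv2
import numpy as np

def chrominance_mean(image_path):
    """Return the mean (a, b) chrominance of an image in the Lab color space."""
    bgr = cv2.imread(image_path)                 # OpenCV loads images as BGR
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)   # convert to Lab
    a_mean = lab[:, :, 1].mean()                 # mean of the a channel
    b_mean = lab[:, :, 2].mean()                 # mean of the b channel
    return np.array([a_mean, b_mean], dtype=np.float64)

def pair_score(path1, path2):
    """Distance between the chrominance averages of the two face crops."""
    return np.linalg.norm(chrominance_mean(path1) - chrominance_mean(path2))

# Hypothetical usage: declare a pair "kin" when the score falls below a
# threshold chosen on held-out pairs (the threshold here is arbitrary).
if __name__ == "__main__":
    score = pair_score("parent_crop.jpg", "child_crop.jpg")
    print("kin" if score < 10.0 else "not kin", score)
```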
Automatic kinship verification from faces aims to determine whether two persons have a biological kin relation by comparing their facial attributes. This is a challenging research problem that has recently received considerable attention from the research community. However, most of the proposed methods have focused on analyzing only the luminance (i.e. gray-scale) of the face images, discarding the chrominance (i.e. color) information, which can be a useful additional cue for verifying kin relationships. This paper investigates for the first time the usefulness of color information in the verification of kinship relationships from facial images. For this purpose, we extract joint color-texture features that encode both the luminance and the chrominance information in the color images. The kinship verification performance using joint color-texture analysis is then compared against counterpart approaches using only gray-scale information. Extensive experiments using different color spaces and texture features are conducted on two benchmark databases. Our results indicate that classifying color images consistently shows superior performance across three different color spaces.
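As an illustrative sketch of a joint color-texture descriptor in this spirit, the code below computes a uniform-LBP histogram per channel of the YCbCr color space and concatenates the histograms; the choice of color space, the LBP parameters, and the cosine-similarity comparison are assumptions made for illustration, not the exact descriptor or protocol used in the paper.

```python
# Minimal sketch of a joint color-texture feature: one uniform-LBP histogram
# per color channel, concatenated into a single descriptor.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def color_lbp_descriptor(image_path, P=8, R=1):
    """Concatenated uniform-LBP histograms over the Y, Cr and Cb channels."""
    bgr = cv2.imread(image_path)
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    n_bins = P + 2                                   # uniform LBP bin count
    hists = []
    for c in range(3):                               # one histogram per channel
        lbp = local_binary_pattern(ycrcb[:, :, c], P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
        hists.append(hist)
    return np.concatenate(hists)                     # joint color-texture feature

# Hypothetical pair comparison: cosine similarity between the two descriptors,
# which could feed any binary classifier used for kinship verification.
def pair_similarity(path1, path2):
    f1, f2 = color_lbp_descriptor(path1), color_lbp_descriptor(path2)
    return float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-12))
```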