Objective To develop an AI-based automatic descriptor that detects and grades, from selfie pictures, 23 facial signs, hair included, as an aid to make-up procedures. Material and Methods Selfie images taken under very different conditions by 3326 women and men were used to create (90% of the dataset) and validate (10% of the dataset) a new algorithm architecture that appraises and grades 23 different facial signs, such as lips, nose, eye color, eyebrows, eyelashes, and hair color, as defined by make-up artists. Each selfie image was annotated by 12 experts, and these annotations served as references to train the artificial intelligence (AI)-based algorithm. Results As some of the 23 signs present a continuous or discontinuous feature, they were analyzed by two different statistical approaches. The results provided by the automatic descriptor system were not only in good agreement with the experts' assessments but were even found to be of better precision and reproducibility. The automatic descriptor system proved accurate and robust despite the very variable conditions under which the selfie pictures were acquired. Conclusion Such an automatic descriptor system seems to provide valuable help in make-up procedures and may extend to other activities such as skincare or haircare. As such, it should allow large investigations to better evaluate consumers' needs for aesthetic improvements.
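The 90/10 train–validation split described above can be sketched as follows. This is an illustrative outline only, not the authors' pipeline: the record structure, field names, and shuffling seed are hypothetical, and the 23 grade values are placeholders.

```python
# Illustrative sketch (not the study's code): shuffling the 3326 annotated
# subjects and splitting them 90% / 10% into training and validation sets.
import random

def split_dataset(records, train_frac=0.9, seed=42):
    """Shuffle records reproducibly and split into (train, validation)."""
    rng = random.Random(seed)
    shuffled = records[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# 3326 subjects, each carrying 23 expert-annotated sign grades (placeholders)
records = [{"subject_id": i, "grades": [0.0] * 23} for i in range(3326)]
train, val = split_dataset(records)
```

A fixed seed keeps the split reproducible across runs, which matters when the same hold-out set must be reused to compare model versions.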
Background: Real-life validation is necessary to ensure our artificial intelligence (AI) skin diagnostic tool is inclusive across a diverse and representative US population of various ages, ancestries and skin phototypes. Objectives: To explore the relevance and accuracy of an automated, algorithm-based analysis of facial signs in representative women of different ancestries, ages and phototypes, living in the same country. Methods: In a cross-sectional study of selfie images of 1041 US women, seven facial signs were automatically graded by an AI-based algorithm and assessed by 50 US dermatologists of various profiles (age, gender, ancestry, geographical location). For both the automated analysis and the dermatologist assessment, the same referential skin atlas was used to standardize the grading scales. The average values and their variability were compared with respect to age, ancestry and phototype. Results: For five signs, the gradings obtained by the automated system were strongly correlated with dermatologists' assessments (r ≥ 0.75); cheek skin pores were moderately correlated (r = 0.63), and pigmentation signs, especially for the darkest skin tones, were weakly correlated (r = 0.40) with the dermatologist assessments. Age and ancestry had no effect on the correlations. In many cases, the automated system performed better than the dermatologist-assessed clinical grading, owing to differences of 0.3–0.5 grading units among the dermatologist panel that were not related to any individual characteristic (e.g. gender, age, ancestry, location). The use of phototypes, as discontinuous categorical variables, is likely a limiting factor in the assessments of grading, whether obtained by automated analysis or clinical assessment of the images.
Conclusions: The AI-based automatic procedure is accurate and clinically relevant for analysing facial signs in a diverse and inclusive population of US women, as confirmed by a diverse panel of dermatologists, although skin tone analysis requires further improvement.
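The study's correlation analysis compares automated grades with clinical grades sign by sign. A minimal sketch of that comparison, assuming the automated grade and a panel-mean clinical grade are available per subject (all data below are toy values, not the study's data):

```python
# Hedged illustration (not the study's code): Pearson correlation between
# automated gradings and panel-mean dermatologist gradings for one sign.
import math

def pearson(x, y):
    """Pearson correlation coefficient r between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-subject grades for a strongly correlated sign
auto_grades = [1.2, 2.5, 3.1, 4.0, 2.2, 3.6]
clinical_grades = [1.0, 2.7, 3.0, 4.2, 2.0, 3.8]
r = pearson(auto_grades, clinical_grades)
```

Under the study's reported thresholds, r ≥ 0.75 would count as strong agreement, 0.63 as moderate, and 0.40 as weak.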
Objective: To determine the aesthetic accordance between a given skin tone and the 11 possible head-hair colours covered by a marketed hair colouration product. Material and methods: Photographs of professional top models representing several ancestries (non-Hispanic European and Euro-American, East Asian, Hispanic Euro-American, and African-American) were used to virtually modify skin tones (light, medium and dark) and hair colour with an artificial intelligence (AI)-based algorithm. The resulting 117 modified photographs were then assessed by five local panels of about 60 women each (one in China, one in France and three in the US). The same questionnaire, written in each panel's own language, asked which combinations of skin tone and hair colour fit preferentially (or were not appreciated) and, using fixed wordings, the reasons for these choices. Results: Answers from the five panels differed according to origin or cultural aspects, although some agreement was found between the non-Hispanic European and Euro-American groups. The Hispanic American panel in the US greatly appreciated darker hair tones (HTs). Two panels (East Asian in China and African American in the US) and part of the non-Hispanic European panel in France declared appreciating all HTs, almost irrespective of skin tone (light, medium and dark). This surprising result is very likely caused by gradings (in %) that differ by values too low to allow a decisive or significant assessment. Being by nature highly subjective (culturally and/or fashion driven), the assessments should rather be viewed as trends, an unavoidable limit of the present virtual approach. The latter nevertheless offers full respect of ethical rules, as such an objective could hardly be pursued in vivo: applying 10 or 11 hair colourations to the same individual is an unthinkable option.
Conclusion: The virtual approach developed in the present study, which mixes two major facial coloured phenotypes, seems to lie at the crossroads of genetic background and the secular desire for a modified appearance. Nonetheless, this methodology could afford, at the individual level and in total confidentiality, great help to subjects affected by some facial skin disorders or afflictions.
Objective To evaluate the capacity of an automatic detection system to accurately grade, from selfie pictures, the severity of eight facial signs in South African men. Methods Selfie pictures (obtained from frontal and back cameras) of 281 South African men of different ages (20–70 years) were obtained and analyzed by an automatic artificial intelligence (AI)-based grading system. Data were compared with the clinical gradings made by experts and dermatologists. Results For all facial signs, both series of gradings were found to be highly correlated, albeit with different coefficients (0.59–0.95), those of marionette lines and cheek pores being of lower value. No differences were observed between data obtained by frontal and back cameras. With age, in most cases, gradings show linear-like changes up to the 50–59 year age-class. When compared to men of other ancestries, South African men present lower wrinkles/texture, pigmentation, and ptosis/sagging scores until 50–59 years, albeit not much different in the cheek pores sign. The mean age of onset of visibility (i.e., reaching grade >1) was 39 years for wrinkles/texture and 45 years for ptosis/sagging. Conclusion This study completes and enlarges previous work conducted on men of other ancestries by showing some South African specificities and slight differences from men of comparable phototypes (African American).
Objective: To explore the possibility of an automatic grading algorithm detecting and quantifying, from selfie pictures, the subtle changes in facial signs brought about by the application of a foundation. Material and Methods: A total of 270 Japanese women of different ages (30–54 y), living in four different Japanese cities, took selfies before, immediately after, and 5 h after applying their own foundation, following their own routine. Results: The analysis of 810 selfie pictures (3 times × 270 women) revealed that, prior to application, all women presented a low grade of facial ageing. In most cases, the severities of less marked facial signs were detected and quantified, and were found to be affected by the routine to different extents in all age-classes, but more intensely in the older age-class (45–54 y), despite their more pronounced signs in bare-skin status. In contrast, periorbital wrinkles were detected as more severe in all age-classes at both time points, that is, immediately and 5 h post-application, as were nasolabial folds at 5 h post-application. The amplitude of these positive or negative changes, although low (fractions of the initial grades), was significantly detected. Conclusion: This automatic system appears apt at grading subtle changes in facial ageing signs brought about by a foundation and could be a valuable help to consumers of make-up products in refining their individual procedure to obtain a more personalized desired facial appearance.
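Quantifying such subtle changes amounts to subtracting baseline grades from post-application grades, sign by sign. A minimal sketch under stated assumptions (the helper, sign names, and grade values are all hypothetical, not the study's pipeline):

```python
# Hypothetical helper (not the study's code): per-sign grade change between
# a baseline selfie and a post-application selfie.
def grade_deltas(baseline, post):
    """Return post-application minus baseline grade for each facial sign."""
    return {sign: round(post[sign] - baseline[sign], 2)
            for sign in baseline}

# Toy grades on a clinical-style scale; a delta of fractions of a grading
# unit is the kind of subtle change the abstract describes.
baseline = {"nasolabial_fold": 2.4, "periorbital_wrinkles": 1.8}
after_5h = {"nasolabial_fold": 2.6, "periorbital_wrinkles": 2.1}
deltas = grade_deltas(baseline, after_5h)
```

Positive deltas would correspond to signs graded as more severe after application (as reported for periorbital wrinkles), negative deltas to signs attenuated by the foundation.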