Background Assessment of psoriasis severity is strongly observer-dependent, and objective assessment tools are largely missing. The increasing number of patients receiving highly expensive therapies that are reimbursed only for moderate-to-severe psoriasis motivates the development of higher-quality assessment tools. Objective To establish an accurate and objective psoriasis assessment method based on segmenting images by machine learning technology. Methods In this retrospective, non-interventional, single-centred, interdisciplinary study of diagnostic accuracy, 259 standardized photographs of Caucasian patients were assessed and typical psoriatic lesions were labelled. Two hundred and three of those were used to train and validate an assessment algorithm, which was then tested on the remaining 56 photographs. The results of the algorithm assessment were compared with the manually marked area, as well as with the affected area determined by trained dermatologists. Results Algorithm assessment achieved an accuracy of more than 90% in 77% of the images and differed from manually marked areas by 5.9% on average. The difference between algorithm-predicted areas and physicians' photograph-based estimates was 8.1% on average. Conclusion The study shows the potential of the evaluated technology. In contrast to the Psoriasis Area and Severity Index (PASI), it allows for objective evaluation and should therefore be developed further as an alternative method to human assessment.
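The abstract's two headline numbers, pixel-level accuracy against a manual marking and the difference in affected-area percentage, can be illustrated with a minimal sketch. The masks, grid size, and function names below are illustrative assumptions, not the study's actual pipeline.

```python
# Toy sketch (not the study's code): compare a predicted binary
# segmentation mask against a manually marked mask on a small 2D grid.

def pixel_accuracy(pred, truth):
    """Fraction of pixels where prediction and manual marking agree."""
    total = sum(len(row) for row in truth)
    correct = sum(p == t for prow, trow in zip(pred, truth)
                  for p, t in zip(prow, trow))
    return correct / total

def affected_area_pct(mask):
    """Percentage of pixels marked as lesional."""
    total = sum(len(row) for row in mask)
    return 100.0 * sum(map(sum, mask)) / total

# Hypothetical 3x4 masks: 1 = lesional pixel, 0 = healthy skin.
truth = [[1, 1, 0, 0],
         [1, 0, 0, 0],
         [0, 0, 0, 0]]
pred  = [[1, 1, 0, 0],
         [1, 1, 0, 0],
         [0, 0, 0, 0]]

acc = pixel_accuracy(pred, truth)                                    # 11/12
area_diff = abs(affected_area_pct(pred) - affected_area_pct(truth))  # in % points
```

The study's "difference from manually marked areas" corresponds to `area_diff`, a gap in overall affected-area percentage that can stay small even when individual pixels disagree.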
Purpose Diagnosis of ocular graft-versus-host disease (oGVHD) is hampered by a lack of clinically validated biomarkers. This study aims to predict disease severity on the basis of tear protein expression in mild oGVHD. Methods Forty-nine patients with and without chronic oGVHD after allogeneic hematopoietic cell transplantation (AHCT) were recruited to a cross-sectional observational study. Patients were stratified using NIH guidelines for oGVHD severity: NIH 0 (none; n = 14), NIH 1 (mild; n = 9), NIH 2 (moderate; n = 16), and NIH 3 (severe; n = 10). The proteomic profile of tears was analyzed using liquid chromatography-tandem mass spectrometry. Random forest and penalized logistic regression were used to generate classification and prediction models to stratify patients according to disease severity. Results Mass spectrometry detected 785 proteins across all samples. A random forest model used to classify patients by disease grade achieved F1-measure values for correct classification of 0.95 (NIH 0), 0.8 (NIH 1), 0.74 (NIH 2), and 0.83 (NIH 3). A penalized logistic regression model was generated by comparing patients without oGVHD and those with mild oGVHD and applied to identify potential biomarkers present early in disease. A panel of 13 discriminant markers achieved significant diagnostic accuracy in identifying patients with moderate-to-severe disease. Conclusions Our work demonstrates the utility of tear protein biomarkers in classifying oGVHD severity and adds further evidence indicating ocular surface inflammation as a main driver of the oGVHD clinical phenotype. Translational Relevance Expression levels of a 13-marker tear protein panel in AHCT patients with mild oGVHD may predict development of more severe oGVHD clinical phenotypes.
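The per-grade F1-measure reported above is the harmonic mean of precision and recall, computed one class at a time in a one-vs-rest fashion. A minimal sketch with made-up labels (the grades 0-3 mirror the NIH scale, but the data are invented):

```python
# Illustrative per-class F1 computation (not the study's code).
# y_true / y_pred are hypothetical NIH severity grades (0-3).

def f1_per_class(y_true, y_pred, classes):
    scores = {}
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores[c] = (2 * precision * recall / (precision + recall)
                     if precision + recall else 0.0)
    return scores

y_true = [0, 0, 1, 1, 2, 2, 3, 3]   # invented ground-truth grades
y_pred = [0, 0, 1, 2, 2, 2, 3, 1]   # invented model predictions
scores = f1_per_class(y_true, y_pred, [0, 1, 2, 3])
```

Reporting F1 per grade, as the study does, exposes which severity levels the classifier confuses, something a single overall accuracy figure would hide.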
Background The exact location of skin lesions is key in clinical dermatology. On one hand, it supports differential diagnosis (DD), since most skin conditions have specific predilection sites. On the other hand, location matters for dermatosurgical interventions. In practice, lesion evaluation is not well standardized, and anatomical descriptions vary or are missing altogether. Automated determination of anatomical location could benefit both situations. Objective To establish an automated method to determine anatomical regions in clinical patient pictures and evaluate the gain in DD performance of a deep learning model (DLM) when trained with lesion locations and images. Methods Retrospective study based on three datasets: macro-anatomy for the main body regions with 6000 patient pictures partially labelled by a student, micro-anatomy for the ear region with 182 pictures labelled by a student and DD with 3347 pictures of 16 diseases determined by dermatologists in clinical settings. For each dataset, a DLM was trained and evaluated on an independent test set. The primary outcome measures were the precision and sensitivity with 95% CI. For DD, we compared the performance of a DLM trained with lesion pictures only with a DLM trained with both pictures and locations. Results The average precision and sensitivity were 85% (CI 84-86) and 84% (CI 83-85) for macro-anatomy, 81% (CI 80-83) and 80% (CI 77-83) for micro-anatomy, and 82% (CI 78-85) and 81% (CI 77-84) for DD. We observed an improvement in DD performance of 6% (McNemar test P-value 0.0009) for both average precision and sensitivity when training with both lesion pictures and locations. Conclusion Including location can be beneficial for DD DLM performance. The proposed method can generate body region maps from patient pictures and even reach surgery-relevant anatomical precision, e.g. the ear region. Our method enables automated search of large clinical databases and makes targeted anatomical image retrieval possible.
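The McNemar test cited in the results compares two classifiers evaluated on the same test cases, using only the discordant pairs (cases one model got right and the other wrong). A hedged sketch of the exact (binomial) variant, with invented counts rather than the study's data:

```python
# Exact two-sided McNemar test for paired classifiers (illustrative).
# b: test cases only model A classified correctly
# c: test cases only model B classified correctly
# Concordant cases (both right or both wrong) do not enter the test.

from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact p-value under H0: P(b-type) = P(c-type) = 0.5."""
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    one_sided = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * one_sided)

# Hypothetical: of 18 discordant cases, the location-aware model wins 15.
p = mcnemar_exact(3, 15)
```

Because both models are scored on the same images, this paired test is more sensitive than comparing two independent accuracy figures.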
Even though standard dermatological images are relatively easy to take, the availability and public release of such datasets for machine learning are notoriously limited due to legal constraints on medical data, limited availability of field experts for annotation, numerous and sometimes rare diseases, large variance of skin pigmentation, and the presence of identifying features such as fingerprints or tattoos. With these generic issues in mind, we explore the application of Generative Adversarial Networks (GANs) to three different types of images showing full hands, skin lesions, and varying degrees of eczema. A first model generates realistic images of all three types with a focus on the technical application of data augmentation. A perceptual study conducted with laypeople confirms that generated skin images cannot be distinguished from real data. Next, we propose models to add eczema lesions to healthy skin and, conversely, to remove eczema from patient skin using segmentation masks in a supervised learning setting. Such models make it possible to leverage existing unrelated skin pictures and enable non-technical applications, e.g. in aesthetic dermatology. Finally, we combine both models for eczema addition and removal in an entirely unsupervised process based on CycleGAN. Although eczema can no longer be placed in particular areas, we achieve convincing results for eczema removal without relying on ground-truth annotations any more.
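The unsupervised CycleGAN setup mentioned above rests on a cycle-consistency constraint: translating an image to the other domain (add eczema) and back (remove eczema) should reconstruct the original. The sketch below uses toy pixel-shift functions as stand-ins for the two trained generators, purely to make the L1 cycle loss concrete; nothing here is the study's implementation.

```python
# Toy illustration of the cycle-consistency idea behind CycleGAN-style
# eczema addition/removal. Images are flat lists of pixel intensities
# in [0, 1]; G and F are hand-written stand-ins for learned generators.

def G(img, delta=0.3):
    """Toy 'add eczema' generator: brighten every pixel, clamped at 1."""
    return [min(1.0, p + delta) for p in img]

def F(img, delta=0.3):
    """Toy 'remove eczema' generator: darken every pixel, clamped at 0."""
    return [max(0.0, p - delta) for p in img]

def cycle_loss(img):
    """Mean L1 cycle-consistency loss ||F(G(x)) - x||_1 / n."""
    rec = F(G(img))
    return sum(abs(a - b) for a, b in zip(rec, img)) / len(img)

loss_ok = cycle_loss([0.1, 0.4, 0.6])   # round trip reconstructs: ~0
loss_clamped = cycle_loss([0.9])        # clamping loses information: > 0
```

In training, this loss is minimized jointly with adversarial losses in both domains, which is what lets the method dispense with paired ground-truth annotations.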
Objectives: Pustular psoriasis (PP) is one of the most severe and chronic skin conditions. Its treatment is difficult, and measurements of its severity are highly dependent on clinicians’ experience. Pustules and brown spots are the main efflorescences of the disease and directly correlate with its activity. We propose an automated deep learning model (DLM) to quantify lesions in terms of count and surface percentage from patient photographs. Methods: In this retrospective study, two dermatologists and a student labeled 151 photographs of PP patients for pustules and brown spots. The DLM was trained and validated with 121 photographs, keeping 30 photographs as a test set to assess the DLM performance on unseen data. We also evaluated our DLM on 213 unstandardized, out-of-distribution photographs of various pustular disorders (referred to as the pustular set), which were ranked from 0 (no disease) to 4 (very severe) by one dermatologist for disease severity. The agreement between the DLM predictions and experts’ labels was evaluated with the intraclass correlation coefficient (ICC) for the test set and the Spearman correlation (SC) coefficient for the pustular set. Results: On the test set, the DLM achieved an ICC of 0.97 (95% confidence interval [CI], 0.97–0.98) for count and 0.93 (95% CI, 0.92–0.94) for surface percentage. On the pustular set, the DLM reached an SC coefficient of 0.66 (95% CI, 0.60–0.74) for count and 0.80 (95% CI, 0.75–0.83) for surface percentage. Conclusions: The proposed method quantifies efflorescences from PP photographs reliably and automatically, enabling a precise and objective evaluation of disease activity.
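The Spearman coefficient used for the pustular set is appropriate here because the dermatologist's 0-4 severity grades are ordinal: only the rank order of the DLM's continuous scores against the grades matters. A minimal rank-based sketch with toy data (average ranks for ties, no confidence intervals, not the study's code):

```python
# Spearman rank correlation from scratch (illustrative, toy data).

def ranks(xs):
    """1-based ranks; tied values receive their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical: dermatologist grades vs. DLM-derived severity ranking.
rho = spearman([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])
```

A coefficient of 1 would mean the DLM orders patients exactly as the dermatologist does, even if the raw scores live on completely different scales.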