Training deep learning models on medical images depends heavily on experts' expensive and laborious manual labels. In addition, these images, labels, and even the models themselves are often not publicly accessible and suffer from various kinds of bias and imbalance. In this paper, we propose CheSS, a chest X-ray pretrained model built via self-supervised contrastive learning, to learn diverse representations of chest radiographs (CXRs). Our contribution is a publicly accessible pretrained model trained on a 4.8M-CXR dataset using self-supervised contrastive learning, together with its validation on a variety of downstream tasks: 6-class disease classification on an internal dataset, disease classification on CheXpert, bone suppression, and nodule generation. Compared with a model trained from scratch, we achieved a 28.5% increase in accuracy on the 6-class classification test dataset. On the CheXpert dataset, we achieved a 1.3% increase in mean area under the receiver operating characteristic curve on the full dataset and an 11.4% increase when using only 1% of the data in a stress-test setting. On bone suppression with perceptual loss, compared with an ImageNet-pretrained model, we improved the peak signal-to-noise ratio from 34.99 to 37.77, the structural similarity index measure from 0.976 to 0.977, and the root-mean-square error from 4.410 to 3.301. Finally, on nodule generation, we improved the Fréchet inception distance from 24.06 to 17.07. Our study demonstrates the strong transferability of the CheSS weights, which can help researchers overcome data imbalance, data shortage, and the inaccessibility of medical image datasets. The CheSS weights are available at https://github.com/mi2rl/CheSS.
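The abstract describes the pretraining only at a high level. As an illustration, below is a minimal sketch of a SimCLR-style contrastive objective (the NT-Xent loss), a common choice for this kind of self-supervised pretraining; the function name, the temperature value, and the assumption that CheSS uses exactly this objective are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss (sketch; hyperparameters are assumptions).

    z1, z2: (N, D) projection-head outputs for two augmented views
    of the same N images; row i of z1 and row i of z2 form a positive pair.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / temperature                       # (2N, 2N) cosine similarities
    sim.fill_diagonal_(float('-inf'))                   # exclude self-similarity
    # The positive for row i is row i+N, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Example usage with random projections (batch of 8, 128-dim embeddings):
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = nt_xent_loss(z1, z2)
```

In practice, z1 and z2 would come from passing two random augmentations of each CXR through the encoder and a projection head; only the encoder weights are kept for downstream transfer.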
Undetected obstructive sleep apnea (OSA) can lead to severe systemic disease. The lateral cephalogram used in orthodontics is a valuable screening tool. We hypothesized that a deep learning-based classifier could differentiate sleep apnea from anatomical features in the lateral cephalogram that humans do not recognize. Moreover, since the imaging devices used by hospitals differ, modality differences in radiography must be overcome in real clinical practice. We therefore propose a knowledge-distillation deep learning model that classifies patients into OSA and non-OSA groups from the lateral cephalogram while simultaneously overcoming modality differences. Lateral cephalograms of 500 OSA patients and 500 non-OSA patients from two different devices were included. A ResNet-50 and a ResNet-50 with feature-based knowledge distillation were trained, and their suitability for classification and modality normalization was compared. ROC analysis and Grad-CAM confirmed that, through knowledge distillation, our model achieves high performance without being misled by features caused by modality differences. Inspection of the predicted OSA probabilities showed improved robustness to modality differences in the lateral cephalogram, suggesting applicability in actual clinical practice.
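For readers unfamiliar with feature-based knowledge distillation, below is a minimal sketch of a FitNets-style objective, in which the student matches a teacher's intermediate feature maps alongside the usual classification loss; the layer pairing, loss weighting, and function names are assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_feat, teacher_feat, student_logits, labels, alpha=0.5):
    """Feature-based KD (sketch): task cross-entropy plus an L2 term
    pulling a student feature map toward the teacher's.

    student_feat, teacher_feat: (N, C, H, W) feature maps from matched layers
    (assumed to have identical shapes; otherwise a 1x1 conv adapter is needed).
    alpha: weight on the feature-matching term (assumed hyperparameter).
    """
    feat_loss = F.mse_loss(student_feat, teacher_feat.detach())  # teacher is frozen
    task_loss = F.cross_entropy(student_logits, labels)
    return task_loss + alpha * feat_loss

# Example usage with random tensors (batch of 4, binary OSA/non-OSA labels):
s_feat, t_feat = torch.randn(4, 256, 14, 14), torch.randn(4, 256, 14, 14)
logits, labels = torch.randn(4, 2), torch.randint(0, 2, (4,))
loss = kd_loss(s_feat, t_feat, logits, labels)
```

The intuition is that the feature-matching term encourages the student to encode device-invariant representations learned by the teacher, so its predictions depend less on modality-specific artifacts.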