Background
Early diagnosis and accurate classification are pivotal for precision medicine and personalized treatment in the management of retinal detachment. In this study, we developed a multi-modal diagnostic system that leverages B-scan ultrasonography, optical coherence tomography (OCT), and ultra-widefield (UWF) fundus imaging to classify retinal detachment precisely.
Methods
From May 2020 to April 2023, we collected UWF, B-scan, and OCT images for each patient. We developed an automatic model to segment retinal detachment lesions on the UWF images. The images were then fed into five transfer learning models for feature extraction, and principal component analysis (PCA) was used to reduce the dimensionality of the features from each model. Next, we sequentially fused the model-level and modality-level features to create multi-modality feature subsets, and identified key features through Spearman correlation analysis and LASSO regression. Based on these key features, we trained five machine learning models and selected the top performer as our primary multi-modality model. The same optimal algorithm was also used to construct single-modality models for the B-scan, OCT, and UWF images. Finally, we compared the performance of the multi-modality and single-modality models on the internal and external validation sets.
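The feature-fusion and selection pipeline above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the deep features are synthetic stand-ins for the outputs of the five pretrained backbones, and all array shapes, the Spearman threshold, and the candidate classifiers are assumptions chosen for the sketch.

```python
# Sketch of the pipeline: per-backbone PCA -> feature fusion ->
# Spearman filter -> LASSO selection -> classifier comparison by AUC.
# Deep features here are random placeholders with a weak injected signal;
# in the study they would come from CNN backbones applied to real images.
import numpy as np
from scipy.stats import spearmanr
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients = 200
y = rng.integers(0, 2, n_patients)  # placeholder binary class labels

# Stand-in deep features: 3 modalities x 5 backbones, 512-dim each
modalities = {m: [rng.normal(size=(n_patients, 512)) + y[:, None] * 0.1
                  for _ in range(5)]
              for m in ("bscan", "oct", "uwf")}

# PCA per backbone, then fuse model-level and modality-level features
fused = []
for backbone_feats in modalities.values():
    for X in backbone_feats:
        fused.append(PCA(n_components=10).fit_transform(X))
X_fused = np.hstack(fused)  # (n_patients, 3 * 5 * 10)

# Spearman filter: keep features correlated with the label
# (the 0.05 cutoff is an arbitrary choice for this sketch)
rho = np.array([spearmanr(X_fused[:, j], y)[0] for j in range(X_fused.shape[1])])
X_filt = X_fused[:, np.abs(rho) > 0.05]

# LASSO selection: key features are those with nonzero coefficients
lasso = LassoCV(cv=5, random_state=0).fit(X_filt, y)
mask = lasso.coef_ != 0
X_key = X_filt[:, mask] if mask.any() else X_filt

# Train several candidate classifiers and keep the best by validation AUC
X_tr, X_va, y_tr, y_va = train_test_split(
    X_key, y, test_size=0.3, random_state=0, stratify=y)
candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(random_state=0),
    "svm": SVC(probability=True, random_state=0),
}
aucs = {name: roc_auc_score(y_va, m.fit(X_tr, y_tr).predict_proba(X_va)[:, 1])
        for name, m in candidates.items()}
best = max(aucs, key=aucs.get)
print(best, round(aucs[best], 3))
```

The same selection-and-comparison loop would then be rerun on each single-modality feature set to build the B-scan, OCT, and UWF baselines.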
Results
The multi-modality model outperformed the best single-modality model (the UWF model). On the internal validation set, it achieved 90.3% accuracy and an AUC of 0.96, improvements of 6.79 percentage points and 0.04, respectively; on the external validation set, it achieved 83.0% accuracy and an AUC of 0.94, improvements of 6.54 percentage points and 0.03.
Conclusion
We developed a multi-modal deep transfer learning model based on UWF, B-scan, and OCT images that outperformed every single-modality model. This model could serve as a diagnostic tool in clinical practice, assisting ophthalmologists in the initial classification of retinal detachment.