Background
Artificial intelligence (AI) has great potential for detecting fungal keratitis in in vivo confocal microscopy (IVCM) images, but its clinical value remains unclear. A major barrier to clinical utility is the lack of explainability and interpretability.
Methods
An explainable AI (XAI) system based on Gradient-weighted Class Activation Mapping (Grad-CAM) and Guided Grad-CAM was established. In this randomized controlled trial, nine ophthalmologists (three expert, three competent, and three novice ophthalmologists) read images under each of three conditions: unassisted, AI-assisted, and XAI-assisted. In the unassisted condition, only the original IVCM images were shown to the readers. AI assistance added a histogram of the model's prediction probabilities; XAI assistance additionally showed explanatory maps. Accuracy, sensitivity, and specificity were calculated against an adjudicated reference standard, and reading time was measured.
Results
Both forms of algorithmic assistance significantly increased the accuracy and sensitivity of competent and novice ophthalmologists without reducing specificity. The improvement was more pronounced in the XAI-assisted condition than in the AI-assisted condition. Time spent with XAI assistance did not differ significantly from time spent unassisted.
Conclusion
AI shows great promise in improving the accuracy of ophthalmologists, and inexperienced readers are the most likely to benefit from the XAI system. With better interpretability and explainability, XAI assistance can boost ophthalmologist performance beyond what is achievable by the reader alone or with black-box AI assistance.
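The explanatory maps described above are produced by Grad-CAM, which weights the feature maps of the last convolutional layer by their spatially averaged gradients and passes the weighted sum through a ReLU. The sketch below illustrates only that final computation on plain arrays; the array shapes and values are hypothetical, and a real system would obtain the activations and gradients from the trained network.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap: ReLU of the channel-weighted sum of feature maps,
    with channel weights = spatially averaged gradients."""
    # activations, gradients: (channels, H, W) from the last conv layer
    weights = gradients.mean(axis=(1, 2))                        # alpha_k per channel
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
    return cam / cam.max() if cam.max() > 0 else cam             # normalise to [0, 1]

# toy 2-channel, 2x2 feature maps (illustrative values only)
acts = np.array([[[1., 0.], [0., 1.]],
                 [[0., 1.], [1., 0.]]])
grads = np.array([[[1., 1.], [1., 1.]],
                  [[-1., -1.], [-1., -1.]]])
cam = grad_cam(acts, grads)
```

In practice the resulting heatmap is upsampled to the input resolution and overlaid on the IVCM image as the explanatory map.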
Purpose
Accurate identification of corneal layers with in vivo confocal microscopy (IVCM) is essential for the correct assessment of corneal lesions. This project aims to develop reliable automated identification of corneal layers from IVCM images.
Methods
A total of 7957 IVCM images were included for model training and testing. Scanning depth information and pixel information of the IVCM images were used to build the classification system. First, two base classifiers, based on a convolutional neural network and K-nearest neighbors, were constructed. Second, two hybrid strategies, a weighted voting method and the light gradient boosting machine (LightGBM) algorithm, were used to fuse the results of the two base classifiers and obtain the final classification. Finally, the confidence of the predictions was stratified to help identify model errors.
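The weighted voting fusion and confidence stratification steps can be sketched as follows. This is a minimal illustration, not the authors' implementation: the classifier weights (0.6/0.4), the confidence threshold (0.7), and the toy probability arrays are all assumed values chosen for demonstration.

```python
import numpy as np

def weighted_vote(proba_a, proba_b, w_a=0.6, w_b=0.4):
    """Fuse per-class probabilities from two base classifiers by weighted averaging."""
    fused = w_a * np.asarray(proba_a) + w_b * np.asarray(proba_b)
    return fused / fused.sum(axis=1, keepdims=True)   # renormalise rows to sum to 1

def stratify_confidence(fused_proba, threshold=0.7):
    """Split predictions into high/low confidence by the top-class probability."""
    preds = fused_proba.argmax(axis=1)
    confident = fused_proba.max(axis=1) >= threshold
    return preds, confident

# toy example: 3 images, 4 hypothetical corneal-layer classes
p_cnn = np.array([[0.70, 0.10, 0.10, 0.10],
                  [0.30, 0.40, 0.20, 0.10],
                  [0.05, 0.05, 0.10, 0.80]])
p_knn = np.array([[0.60, 0.20, 0.10, 0.10],
                  [0.25, 0.35, 0.30, 0.10],
                  [0.10, 0.10, 0.20, 0.60]])

fused = weighted_vote(p_cnn, p_knn)
preds, confident = stratify_confidence(fused)
```

Low-confidence predictions (where the fused top-class probability falls below the threshold) are the ones flagged for manual review, which is how the stratification helps surface model errors.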
Results
Both hybrid systems outperformed the two base classifiers. The weighted area under the curve, weighted precision, weighted recall, and weighted F1 score were 0.9841, 0.9096, 0.9145, and 0.9111 for the weighted voting hybrid system, and 0.9794, 0.9039, 0.9055, and 0.9034 for the LightGBM stacking hybrid system, respectively. More than half of the misclassified samples were identified using the confidence stratification method.
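The "weighted" metrics reported above are support-weighted averages of the per-class scores, i.e. each class's precision, recall, and F1 is weighted by its share of the test set. A minimal sketch of that computation (the label arrays below are illustrative, not the study's data):

```python
import numpy as np

def weighted_prf(y_true, y_pred, n_classes):
    """Support-weighted precision, recall, and F1 from label arrays."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    support = np.array([(y_true == c).sum() for c in range(n_classes)])
    prec = np.zeros(n_classes)
    rec = np.zeros(n_classes)
    for c in range(n_classes):
        tp = ((y_pred == c) & (y_true == c)).sum()
        pred_c = (y_pred == c).sum()
        prec[c] = tp / pred_c if pred_c else 0.0      # per-class precision
        rec[c] = tp / support[c] if support[c] else 0.0  # per-class recall
    f1 = np.where(prec + rec > 0, 2 * prec * rec / (prec + rec), 0.0)
    w = support / support.sum()                        # class weights = support share
    return (w * prec).sum(), (w * rec).sum(), (w * f1).sum()

# illustrative 3-class example
p, r, f = weighted_prf([0, 0, 1, 1, 2, 2], [0, 0, 1, 2, 2, 2], n_classes=3)
```

This matches the convention of scikit-learn's `average='weighted'` option.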
Conclusions
The proposed hybrid approach could effectively integrate the scanning depth and pixel information of IVCM images, allowing accurate identification of corneal layers in grossly normal IVCM images. The confidence stratification approach was useful for detecting misclassifications by the system.
Translational Relevance
The proposed hybrid approach lays important groundwork for the automatic identification of corneal layers in IVCM images.