Objective
To assess the performance of a hybrid Transformer-based convolutional neural network (CNN) model for automated detection of keratoconus from stand-alone Scheimpflug-based dynamic corneal deformation videos (DCDVs).
Design
Retrospective cohort study.
Methods
We used transfer learning with a CNN to extract feature maps from DCDVs. These feature maps were then processed with self-attention to model long-range dependencies before classification, identifying keratoconus directly from the videos. Model performance was evaluated with objective accuracy metrics on DCDVs from two independent cohorts of 275 and 546 subjects.
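As a minimal sketch of this type of hybrid architecture (not the authors' implementation; the ResNet-18 backbone, layer sizes, frame count, and pooling strategy are illustrative assumptions), a pretrained CNN can extract per-frame features from a deformation video, with a Transformer encoder applying self-attention across frames before classification:

```python
# Hedged sketch of a hybrid CNN-Transformer video classifier (PyTorch).
# Backbone choice, dimensions, and frame count are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models


class HybridCNNTransformer(nn.Module):
    def __init__(self, d_model=512, n_heads=8, n_layers=2):
        super().__init__()
        # Pretrained CNN backbone (transfer learning) extracts per-frame features.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop final FC
        # Transformer encoder applies self-attention across frames to model
        # long-range temporal dependencies in the deformation video.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_model, 1)  # keratoconus logit

    def forward(self, video):
        # video: (batch, frames, 3, H, W)
        b, t, c, h, w = video.shape
        feats = self.cnn(video.view(b * t, c, h, w)).flatten(1)  # (b*t, 512)
        feats = feats.view(b, t, -1)                             # (b, t, 512)
        feats = self.transformer(feats)                          # self-attention
        return self.classifier(feats.mean(dim=1)).squeeze(-1)    # pooled logit


# Example: score a dummy 16-frame video clip.
model = HybridCNNTransformer()
probs = torch.sigmoid(model(torch.randn(2, 16, 3, 224, 224)))  # keratoconus probability
```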
Main Outcome Measures
Area under the receiver operating characteristic curve (AUC), accuracy, specificity, sensitivity, and F1 score.
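For illustration only (placeholder labels and probabilities, not study data), these outcome measures can be computed from model outputs with scikit-learn as follows:

```python
# Illustrative computation of AUC, accuracy, sensitivity, specificity, and F1.
# y_true and y_prob are placeholder values, not data from the study.
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score, f1_score, confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                 # 1 = keratoconus
y_prob = np.array([0.1, 0.4, 0.9, 0.8, 0.6, 0.2, 0.7, 0.3])  # model probabilities
y_pred = (y_prob >= 0.5).astype(int)                          # 0.5 threshold (assumed)

auc = roc_auc_score(y_true, y_prob)
accuracy = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(auc, accuracy, f1, sensitivity, specificity)
```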
Results
The sensitivity and specificity of the model in detecting keratoconus were 93% and 84%, respectively. The AUC of the keratoconus probability score based on the external validation database was 0.97.
Conclusions
The hybrid Transformer-based model was highly sensitive and specific in discriminating normal from keratoconic eyes using DCDVs, at levels that may prove useful in clinical practice.
Translational Relevance
The hybrid Transformer-based model can detect keratoconus directly from non-invasive corneal deformation videos without requiring corneal topography or tomography, demonstrating potential applications in corneal research and clinical practice.