The most common way to train a deep learning model for medical image classification, including classification of ophthalmic images, involves supervised learning in which training data are manually labeled by trained human graders. Transfer learning may then be applied to a pretrained "off-the-shelf" model backbone, such as VGG or ResNet,1 and the model is fine-tuned with the labeled ophthalmic data. This common workflow is limited by the time-consuming and labor-intensive nature of training data annotation. One promising approach to address this limitation is self-supervised learning (SSL), which is the focus of the article by Gholami et al2 in JAMA Ophthalmology. As the name suggests, SSL obviates the need for human annotation of the training data by deriving its supervisory signal from the images themselves, for example, by learning to produce similar representations for differently augmented views of the same image.

Gholami et al2 explored how to incorporate SSL into model development for the detection of macular telangiectasia in optical coherence tomography (OCT) images. Their study had 3 main findings. First, when a sizable amount of labeled training data was available (100% of the available labeled training data in their simulation), pretraining the model with SSL before supervised fine-tuning incrementally improved model performance compared with traditional supervised learning (TSL) alone. Second, when only a small amount of labeled training data was available (10% of the available labeled training data in their simulation), the SSL-first approach dramatically outperformed a TSL-only approach. Third, their SSL model fine-tuned with only 10% of the available labeled training data achieved performance comparable to that of the best model in their study.
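To make the two workflows concrete, the sketch below illustrates the TSL and SSL-first pipelines. It is a minimal illustration, not the authors' implementation: it assumes PyTorch/torchvision, an ImageNet-initialized ResNet-18 backbone, and a SimCLR-style contrastive objective as the SSL stage; Gholami et al's actual architecture, augmentations, and SSL loss may differ, and all function names here are illustrative.

```python
# Minimal sketch (assumptions noted above): TSL = Stage 2 only;
# SSL-first = Stage 1 on unlabeled scans, then Stage 2 on labeled scans.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

# Shared backbone; nn.Identity() strips the ImageNet classification head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feat_dim = backbone.fc.in_features
backbone.fc = nn.Identity()

# --- Stage 1 (SSL, no labels): contrastive pretraining on unlabeled OCT scans ---
projector = nn.Sequential(  # projection head, discarded after pretraining
    nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, 128)
)
ssl_opt = torch.optim.Adam(
    list(backbone.parameters()) + list(projector.parameters()), lr=1e-3
)

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style loss: two augmented views of the same scan are positives."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)  # 2N x d
    sim = z @ z.T / temperature                  # pairwise cosine similarities
    sim.fill_diagonal_(-1e9)                     # a view is never its own negative
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

def ssl_step(view1, view2):
    """One pretraining step on two random augmentations of the same batch."""
    ssl_opt.zero_grad()
    loss = nt_xent_loss(projector(backbone(view1)), projector(backbone(view2)))
    loss.backward()
    ssl_opt.step()
    return loss.item()

# --- Stage 2 (supervised): fine-tune with the (possibly small) labeled subset ---
classifier = nn.Linear(feat_dim, 2)  # hypothetical binary head: MacTel vs normal
criterion = nn.CrossEntropyLoss()
ft_opt = torch.optim.Adam(
    list(backbone.parameters()) + list(classifier.parameters()), lr=1e-4
)

def finetune_step(images, labels):
    """One supervised step on grader-labeled OCT images."""
    ft_opt.zero_grad()
    loss = criterion(classifier(backbone(images)), labels)
    loss.backward()
    ft_opt.step()
    return loss.item()
```

In this framing, the study's 10% condition corresponds to running Stage 2 on one-tenth of the labeled set, either after the SSL stage (SSL-first) or directly from the ImageNet initialization (TSL-only); Stage 1 consumes no grader labels at all, which is what makes the approach attractive when annotation is the bottleneck.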