Precis:
Pegasus outperformed 5 of the 6 ophthalmologists in terms of diagnostic performance, and there was no statistically significant difference between the deep learning system and the “best case” consensus of the ophthalmologists. The agreement between Pegasus and the gold standard was 0.715, whereas the highest ophthalmologist agreement with the gold standard was 0.613. Furthermore, the high sensitivity of Pegasus makes it a valuable tool for screening patients for glaucomatous optic neuropathy.
Purpose:
The purpose of this study was to evaluate the performance of a deep learning system for the identification of glaucomatous optic neuropathy.
Materials and Methods:
Six ophthalmologists and the deep learning system, Pegasus, graded 110 color fundus photographs in this retrospective single-center study. Patient images were randomly sampled from the Singapore Malay Eye Study. Ophthalmologists and Pegasus were compared with each other and with the original clinical diagnosis given by the Singapore Malay Eye Study, which was defined as the gold standard. Pegasus’ performance was compared with the “best case” consensus scenario, defined as the combination of ophthalmologists whose consensus opinion most closely matched the gold standard. The performance of the ophthalmologists and Pegasus in the binary classification of nonglaucoma versus glaucoma from fundus photographs was assessed in terms of sensitivity, specificity, and the area under the receiver operating characteristic curve (AUROC), and the intraobserver and interobserver agreements were determined.
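The study does not specify the software used for its statistical analysis; the following is a minimal, illustrative sketch of how the reported metrics (sensitivity, specificity, AUROC, and agreement with the gold standard via Cohen's kappa) could be computed in Python with scikit-learn. All labels and scores shown are hypothetical, not data from the study.

```python
# Illustrative sketch only: hypothetical gold-standard labels and grader outputs,
# not data from the Singapore Malay Eye Study.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix, cohen_kappa_score

# Gold-standard diagnoses (1 = glaucoma, 0 = nonglaucoma) for a handful of photographs.
gold = np.array([1, 0, 1, 1, 0, 0, 1, 0])
# Binary calls and continuous scores from a grader (an ophthalmologist or Pegasus).
grader_binary = np.array([1, 0, 0, 1, 0, 1, 1, 0])
grader_scores = np.array([0.9, 0.2, 0.4, 0.8, 0.1, 0.6, 0.7, 0.3])

# Sensitivity and specificity from the 2x2 confusion matrix.
tn, fp, fn, tp = confusion_matrix(gold, grader_binary).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

# Area under the receiver operating characteristic curve (AUROC).
auroc = roc_auc_score(gold, grader_scores)

# Agreement with the gold standard (Cohen's kappa).
kappa = cohen_kappa_score(gold, grader_binary)

print(f"Sensitivity={sensitivity:.3f}, Specificity={specificity:.3f}, "
      f"AUROC={auroc:.3f}, Kappa={kappa:.3f}")
```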
Results:
Pegasus achieved an AUROC of 92.6% compared with ophthalmologist AUROCs that ranged from 69.6% to 84.9% and the “best case” consensus scenario AUROC of 89.1%. Pegasus had a sensitivity of 83.7% and a specificity of 88.2%, whereas the ophthalmologists’ sensitivity ranged from 61.3% to 81.6% and specificity ranged from 80.0% to 94.1%. The agreement between Pegasus and the gold standard was 0.715, whereas the highest ophthalmologist agreement with the gold standard was 0.613. Intraobserver agreement ranged from 0.62 to 0.97 for the ophthalmologists and was perfect (1.00) for Pegasus. The deep learning system took approximately 10% of the time taken by the ophthalmologists to determine a classification.
Conclusions:
Pegasus outperformed 5 of the 6 ophthalmologists in terms of diagnostic performance, and there was no statistically significant difference between the deep learning system and the “best case” consensus of the ophthalmologists. The high sensitivity of Pegasus makes it a valuable tool for screening patients for glaucomatous optic neuropathy. Future work will extend this study to a larger sample of patients.