In this issue of JAMA Ophthalmology, Qian et al1 present their research on artificial intelligence (AI) and deep learning algorithms for diagnosing myopic maculopathy (MM). By organizing an international competition called the Myopic Maculopathy Analysis Challenge (MMAC),2 the authors aimed to develop and evaluate automated solutions for MM classification, lesion segmentation, and spherical equivalent prediction. This competition framework provided a standardized approach for creating and assessing AI models against the diagnostic performance of ophthalmologists.

The MMAC competition distributed 3 subdatasets containing 2306, 294, and 2003 fundus images to participants. These images, captured using color fundus photography, were provided to develop algorithms that were subsequently evaluated on independent test sets. The competition's structure facilitated a rigorous comparison of AI models: the best-performing algorithm achieved a very good quadratic-weighted κ of 0.901 for MM classification and a Dice similarity coefficient of up to 0.841 for Fuchs spot segmentation. Model ensembles, which combine outputs from multiple algorithms, showed enhanced performance in MM classification and segmentation tasks, surpassing both individual models and ophthalmologists.

This approach, known as crowdsourcing science, leverages public capacity to address research questions. Since its introduction by Howe in 2006,3 crowdsourcing has gained significant attention in the literature, becoming a priority for the National Library of Medicine and the National Institutes of Health (NIH). Recently, the NIH funded 4 data-generation projects totaling $130 million to accelerate the use of AI in biomedical research aimed at solving real-world problems. These projects generate flagship datasets, tools, and practice guides representing best practices for using AI in the biomedical sector4 and are designed for broader use by the research community. Widespread sharing of these datasets and others supported by the NIH Common Fund is aimed at fueling advancements by lowering barriers to data access. Similarly, the California Healthcare Foundation sponsored a Kaggle competition in 2015 to develop models for detecting diabetic retinopathy (DR) severity.5 In that competition, the ground truth was determined by a single clinician. In contrast, in the work by Qian et al,1 three specialists with different experience levels labeled the fundus photographs, reflecting a more rigorous approach to defining ground truth. This method strengthened the modeling efforts by incorporating input from experienced experts. Notably, the ensemble approach achieved higher accuracy in segmentation tasks, which ophthalmologists are not trained to perform. On the classification front, the ensemble model surpassed ophthalmologists in diagnosing MM, achieving 7.4% higher sensitivity and 1.3% higher specificity.
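The quadratic-weighted κ and Dice similarity coefficient cited above are standard metrics for grading agreement and segmentation overlap, respectively. The following is a minimal sketch of how they are typically computed using scikit-learn and NumPy; the grade labels and masks below are illustrative placeholders, not the MMAC data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Illustrative MM severity grades (0-4) for 6 fundus images:
# ground-truth labels vs. one model's predictions (toy values).
truth = [0, 1, 2, 2, 3, 4]
preds = [0, 1, 2, 3, 3, 4]

# Quadratic weighting penalizes a disagreement by the squared distance
# between grades, so confusing grade 0 with grade 4 costs far more than
# confusing grade 2 with grade 3.
qwk = cohen_kappa_score(truth, preds, weights="quadratic")
print(f"quadratic-weighted kappa: {qwk:.3f}")

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), from 0 (no overlap) to 1 (identical)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 4x4 lesion masks standing in for Fuchs spot segmentations.
gt = np.array([[0, 0, 1, 1], [0, 1, 1, 1], [0, 0, 1, 0], [0, 0, 0, 0]])
pr = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 0, 0]])
print(f"Dice coefficient: {dice(gt, pr):.3f}")
```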
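The model ensembling the editorial describes combines the outputs of several independently trained models. The paper does not specify the exact combination rule, so the sketch below shows two common strategies under that assumption: soft voting over class probabilities for classification, and pixelwise majority voting for segmentation. The model outputs here are hypothetical placeholders, not the challenge entries.

```python
import numpy as np

# Hypothetical class-probability outputs for one image from 3 models
# (rows) over 5 MM severity grades (columns); real values would come
# from each team's trained network.
probs = np.array([
    [0.10, 0.60, 0.20, 0.05, 0.05],
    [0.05, 0.70, 0.15, 0.05, 0.05],
    [0.15, 0.40, 0.35, 0.05, 0.05],
])
# Soft voting: average the probabilities, then take the top grade.
ensemble_grade = int(np.argmax(probs.mean(axis=0)))

# Hypothetical binary lesion masks from 3 segmentation models (2x3 pixels).
masks = np.array([
    [[1, 1, 0], [0, 1, 0]],
    [[1, 0, 0], [0, 1, 1]],
    [[1, 1, 0], [0, 1, 0]],
])
# Pixelwise majority vote: keep a pixel if at least half the models mark it.
ensemble_mask = (masks.mean(axis=0) >= 0.5).astype(int)

print(ensemble_grade)
print(ensemble_mask)
```

Averaging probabilities (rather than hard labels) lets a confident model outvote two uncertain ones, which is one plausible reason ensembles outperformed individual entries in the challenge.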
Japan leads OECD countries in medical imaging technology deployment but lacks open, large-scale medical imaging databases crucial for AI development. While Japan maintains extensive repositories, access restrictions limit their research utility, contrasting with open databases like the US Cancer Imaging Archive and UK Biobank. The 2018 Next Generation Medical Infrastructure Act attempted to address this through new data-sharing frameworks, but implementation has been limited by strict privacy regulations and institutional resistance. This data gap risks compromising AI system performance for Japanese patients and limits global medical AI advancement. The solution lies not in developing individual AI models, but in democratizing access to well-curated Japanese medical imaging data. By implementing privacy-preserving techniques and streamlining regulatory processes, Japan could enhance domestic healthcare outcomes while contributing to more robust global AI models, ultimately reclaiming its position as a leader in medical innovation.