Purpose: Patients with Type II diabetes are at increased risk of diabetic retinopathy (DR). Early detection and treatment of DR are critical for preserving vision and preventing blindness. To address the need for increased DR detection and referrals, we evaluated the use of artificial intelligence (AI) for DR screening.
Methods: Patient images were obtained using a 45-degree Canon Non-Mydriatic CR-2 Plus AF retinal camera in the Department of Endocrinology Clinic (Newark, NJ) and at a community screening event (Newark, NJ). Images were initially classified by an on-site grader and uploaded for analysis by EyeArt, a cloud-based AI software developed by Eyenuk (California, USA). The images were also graded by an off-site retina specialist. Using Fleiss kappa analysis, agreement was assessed among the three grading sources, the AI, the on-site grader, and a US board-certified retina specialist, for both the diagnosis of DR and the referral decision.
Results: EyeArt, the on-site grader, and the retina specialist had 79% overall agreement on the diagnosis of DR. The kappa value for concordance on diagnosis was 0.69 (95% CI: 0.61-0.77), indicating substantial agreement. Referral decisions by EyeArt, the on-site grader, and the retina specialist had 85% overall agreement. The kappa value for concordance on "whether to refer" was 0.70 (95% CI: 0.60-0.80), indicating substantial agreement.
Conclusions: This retrospective cross-sectional analysis offers insights into the use of AI in diabetic screenings and the significant role it will play in automated detection of DR. The EyeArt readings were beneficial, with some limitations, in a community screening environment.
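The Fleiss kappa statistic used above measures chance-corrected agreement among a fixed number of raters (here, three: the AI, the on-site grader, and the retina specialist). As a minimal sketch of the computation, the example below implements the standard Fleiss kappa formula on a small, entirely hypothetical table of ratings; the image counts and grades are illustrative and not taken from the study.

```python
def fleiss_kappa(table):
    """Fleiss' kappa for a subjects x categories count table.

    table[i][j] = number of raters who assigned subject i to category j;
    every row must sum to the same number of raters n.
    """
    N = len(table)            # number of subjects (e.g., retinal images)
    n = sum(table[0])         # raters per subject
    k = len(table[0])         # number of categories (e.g., DR / no DR)
    total = N * n
    # Overall proportion of ratings falling in each category
    p = [sum(row[j] for row in table) / total for j in range(k)]
    # Per-subject observed agreement
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in table]
    P_bar = sum(P) / N                      # mean observed agreement
    P_e = sum(pj * pj for pj in p)          # expected agreement by chance
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical illustration: 3 graders rating 5 images,
# counts per image as [DR, no DR].
ratings = [
    [3, 0],  # all three call DR
    [0, 3],  # all three call no DR
    [2, 1],  # split decision
    [3, 0],
    [1, 2],
]
print(round(fleiss_kappa(ratings), 3))  # -> 0.444
```

A kappa of 0.61-0.80 is conventionally interpreted as "substantial" agreement, which is the band both reported values (0.69 and 0.70) fall into.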
Objectives: Quantification of academic productivity relies on bibliometric measurements, such as the Hirsch index (h-index). The National Institutes of Health (NIH) recently developed the relative citation ratio (RCR), an article-level, citation-driven metric that compares researchers with others within their respective fields. Our study is the first to examine the use of the RCR in academic otolaryngology.
Study Design: Retrospective database review.
Methods: Academic otolaryngology residency programs were identified using the 2022 Fellowship and Residency Electronic Interactive Database. Demographic and training data were collected for surgeons using institutional websites. RCR was calculated using the NIH iCite tool, and h-index was calculated using Scopus. Mean RCR (m-RCR) is the average score of the author's articles. Weighted RCR (w-RCR) is the sum of all article scores. These derivatives are measures of impact and output, respectively. The career duration of each physician was categorized into the following cohorts: 0-10, 11-20, 21-30, and 31+ years.
Results: A total of 1949 academic otolaryngologists were identified. Men had higher h-indices and w-RCRs than women (both p < 0.001). m-RCR did not differ between genders (p = 0.083). h-index and w-RCR differed among the career duration cohorts (both p < 0.001), but m-RCR did not (p = 0.416). All metrics were highest at the faculty rank of professor (p < 0.001).
Conclusion: Critics of the h-index argue that it reflects the time a researcher has spent in the field rather than impact. The RCR may reduce historic bias against women and younger otolaryngologists.
Level of Evidence: N/A. Laryngoscope, 2023.
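The two RCR derivatives defined in the Methods are simple aggregations of an author's per-article RCR scores: m-RCR is the mean (impact per article, insensitive to publication count) and w-RCR is the sum (total output, which grows with career length). A minimal sketch, using made-up RCR values rather than any real author's iCite data:

```python
# Hypothetical per-article RCR scores for one author.
# An RCR of 1.0 means the article is cited at the NIH
# field-normalized average rate for its area.
rcr_scores = [2.1, 0.8, 1.5, 0.4, 3.2]

m_rcr = sum(rcr_scores) / len(rcr_scores)  # mean RCR: average impact
w_rcr = sum(rcr_scores)                    # weighted RCR: total output

print(round(m_rcr, 2), round(w_rcr, 2))    # -> 1.6 8.0
```

This distinction explains the results pattern: w-RCR, like the h-index, accumulates with career duration, while m-RCR does not, which is why only m-RCR showed no difference across career cohorts or genders.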