Purpose: SARS-CoV-2 RNA has been detected in tears and conjunctival samples from infected individuals, and conjunctivitis has been reported in a small number of cases. We evaluated ocular symptoms and the ocular tropism of SARS-CoV-2 in a group of patients with COVID-19.

Method: Fifty-six patients infected with SARS-CoV-2 were recruited as subjects. Relevant medical histories were obtained from the electronic medical record system, and ocular history and ocular symptom data were obtained by communicating directly with the subjects. The Ocular Surface Disease Index (OSDI) and the Salisbury Eye Evaluation Questionnaire (SEEQ) were used to assess the anterior ocular surface before and after disease onset.

Results: Patients classified as severe COVID-19 cases were more likely to have hypertension than those with mild disease (p = 0.035). Of the 56 subjects, 13 (23%) were infected in Wuhan, 32 (57%) were community-infected, 10 (18%) were of unknown origin, and 1 (2%) was a physician likely infected by a confirmed patient. Three patients wore face masks as a precaution when in contact with confirmed patients. Fifteen (27%) had aggravated ocular symptoms, of whom 6 (11%) had prodromal ocular symptoms before disease onset. Mean OSDI and SEEQ scores both differed significantly before versus after the onset of COVID-19 (p < 0.05 for both).

Conclusions: Ocular symptoms are relatively common in COVID-19 and may appear just before the onset of respiratory symptoms. Our data provide anecdotal evidence of transmission of SARS-CoV-2 via the ocular surface.

We thank all the patients for their participation. We are grateful to the attending physicians in the isolated wards of the First
No author has a financial or proprietary interest in any material or method mentioned.
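The OSDI and SEEQ comparison above is a paired design: the same patients are scored before and after disease onset. The sketch below illustrates one way such a paired comparison could be run; the score values and the choice of the Wilcoxon signed-rank test are assumptions for illustration, since the abstract reports only that p < 0.05.

```python
# Hedged sketch: paired comparison of ocular-surface scores before vs. after
# COVID-19 onset. The data and the use of the Wilcoxon signed-rank test are
# illustrative assumptions; the abstract reports only p < 0.05.
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical OSDI scores (range 0-100) for the same patients pre/post onset
osdi_before = np.array([4.2, 8.3, 2.1, 12.5, 6.3, 0.0, 10.4])
osdi_after = np.array([10.4, 15.6, 6.3, 20.8, 12.5, 4.2, 18.8])

# Paired, non-parametric test on the per-patient differences
stat, p_value = wilcoxon(osdi_before, osdi_after)
print(f"Wilcoxon statistic = {stat:.2f}, p = {p_value:.4f}")
```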
Summary

Background: Pioneering efforts have been made to facilitate the recognition of pathology in malignancies from whole-slide images (WSIs) through deep learning approaches. It remains unclear whether basal cell carcinoma (BCC) can be accurately detected and located using smartphone-captured images.

Objectives: To develop deep neural network frameworks for accurate BCC recognition and segmentation based on smartphone-captured microscopic ocular images (MOIs).

Methods: We collected a total of 8046 MOIs, 6610 of which had binary classification labels while the remaining 1436 had pixelwise annotations. In addition, 128 WSIs were collected for comparison. Two deep learning frameworks were created. The 'cascade' framework used a classification model to identify hard cases (images with low prediction confidence) and a segmentation model for further in-depth analysis of the hard cases. The 'segmentation' framework directly segmented and classified all images. Sensitivity, specificity and area under the curve (AUC) were used to evaluate the overall performance of BCC recognition.

Results: The MOI- and WSI-based models achieved comparable AUCs of around 0·95. The 'cascade' framework achieved 0·93 sensitivity and 0·91 specificity. The 'segmentation' framework was more accurate but required more computational resources, achieving 0·97 sensitivity, 0·94 specificity and 0·987 AUC. The runtime of the 'segmentation' framework was 15·3 ± 3·9 s per image, whereas the 'cascade' framework took 4·1 ± 1·4 s. Additionally, the 'segmentation' framework achieved a mean intersection over union of 0·863.

Conclusions: Based on MOIs readily obtained via smartphone photography, we developed two deep learning frameworks that recognize BCC pathology with high sensitivity and specificity. This work opens a new avenue for automatic BCC diagnosis in different clinical scenarios.

What's already known about this topic? The diagnosis of basal cell carcinoma (BCC) is labour-intensive owing to the large number of images to be examined, especially when consecutive slide reading is needed in Mohs surgery. Deep learning approaches have demonstrated promising results on pathological image-related diagnostic tasks. Previous studies have focused on whole-slide images (WSIs) and leveraged classification on image patches to detect and localize breast cancer metastases.

What does this study add? Instead of WSIs, microscopic ocular images (MOIs) photographed from microscope eyepieces using smartphone cameras were used to develop neural network models for recognizing BCC automatically. The MOI- and WSI-based models achieved comparable areas under the curve of around 0·95. Two deep learning frameworks for recognizing BCC pathology were developed with high sensitivity and specificity. Recognizing BCC through a smartphone could become a clinical option in future.
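The 'cascade' framework is described as routing only low-confidence ('hard') images to the slower segmentation model, which is how it trades a little accuracy for the roughly fourfold runtime saving reported above. A minimal sketch of that routing logic follows; the model objects, the 0.9 confidence threshold, and the 1% mask-area rule are illustrative assumptions, not the authors' published implementation.

```python
# Hedged sketch of the 'cascade' inference idea: a fast classifier handles
# confident images, and only low-confidence ("hard") images are escalated to
# the slower segmentation model. Thresholds are illustrative assumptions.
import torch

CONF_THRESHOLD = 0.9  # assumption: confidence cut-off separating easy/hard cases

def cascade_predict(image: torch.Tensor, classifier, segmenter) -> dict:
    """Return a BCC/no-BCC call, escalating hard cases to segmentation."""
    with torch.no_grad():
        prob_bcc = torch.sigmoid(classifier(image)).item()
    confidence = max(prob_bcc, 1.0 - prob_bcc)
    if confidence >= CONF_THRESHOLD:
        # Easy case: trust the lightweight classifier (~4 s per image).
        return {"bcc": prob_bcc >= 0.5, "source": "classifier"}
    # Hard case: run pixelwise segmentation (~15 s per image) and call the
    # image positive if a non-trivial tumour region is predicted (assumed
    # rule: predicted mask covers more than 1% of the pixels).
    with torch.no_grad():
        mask = torch.sigmoid(segmenter(image)) >= 0.5
    return {"bcc": mask.float().mean().item() > 0.01, "source": "segmenter"}
```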
Background: Pathologic myopia (PM) associated with myopic maculopathy (MM) and “Plus” lesions is a major cause of irreversible visual impairment worldwide. We therefore aimed to develop a series of deep learning algorithms and artificial intelligence (AI) models for automatic PM identification, MM classification, and “Plus” lesion detection based on retinal fundus images.

Materials and Methods: A total of 37,659 consecutive retinal fundus images from 32,419 patients were collected. After excluding 5,649 ungradable images, a dataset of 32,010 color retinal fundus images was manually graded according to the META-PM classification for training and cross-validation. We also retrospectively recruited 1,000 images from 732 patients at three other hospitals in Zhejiang Province to serve as an external validation dataset. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, accuracy, and quadratic-weighted kappa score were calculated to evaluate the classification algorithms; precision, recall, and F1-score were calculated to evaluate the object detection algorithms. The performance of all algorithms was compared with that of experts. To better understand the algorithms and clarify the direction of optimization, misclassification and visualization heatmap analyses were performed.

Results: In five-fold cross-validation, algorithm I achieved robust performance, with accuracy = 97.36% (95% CI: 0.9697, 0.9775), AUC = 0.995 (95% CI: 0.9933, 0.9967), sensitivity = 93.92% (95% CI: 0.9333, 0.9451), and specificity = 98.19% (95% CI: 0.9787, 0.9852). The macro-AUC, accuracy, and quadratic-weighted kappa were 0.979, 96.74% (95% CI: 0.963, 0.9718), and 0.988 (95% CI: 0.986, 0.990) for algorithm II. Algorithm III achieved an accuracy of 0.9703 to 0.9941 for classifying “Plus” lesions and an F1-score of 0.6855 to 0.8890 for detecting and localizing lesions. Performance on the external validation dataset was comparable to that of the experts and slightly inferior to that in cross-validation.

Conclusion: Our algorithms and AI models achieved robust performance under real-world conditions. Their application holds promise for facilitating large-scale clinical diagnosis and healthcare screening for PM.
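Algorithm II's grading performance is summarized above with a macro-AUC and a quadratic-weighted kappa, the latter penalizing large grade disagreements more heavily than near-misses. The sketch below shows how these two metrics are typically computed for a five-grade META-PM classifier; the label and probability arrays are dummy data, since the study's own pipeline is not reproduced here.

```python
# Hedged sketch of the reported evaluation metrics for a multi-class MM
# grader. All arrays are illustrative dummy data, not the study's results.
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_auc_score

y_true = np.array([0, 1, 2, 3, 4, 2, 1, 0])   # META-PM categories 0-4
y_pred = np.array([0, 1, 2, 3, 3, 2, 1, 0])   # model's predicted categories
# Dummy per-class probabilities (rows sum to 1), as a classifier would emit
y_score = np.random.default_rng(0).dirichlet(np.ones(5), size=8)

# Quadratic weighting penalizes large grade disagreements more heavily
qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
# Macro-AUC: one-vs-rest AUC averaged over the five grades
macro_auc = roc_auc_score(y_true, y_score, multi_class="ovr", average="macro")
print(f"quadratic-weighted kappa = {qwk:.3f}, macro-AUC = {macro_auc:.3f}")
```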