Large, labeled datasets have driven deep learning methods to achieve expert-level performance on a variety of medical imaging tasks. We present CheXpert, a large dataset that contains 224,316 chest radiographs of 65,240 patients. We design a labeler to automatically detect the presence of 14 observations in radiology reports, capturing uncertainties inherent in radiograph interpretation. We investigate different approaches to using the uncertainty labels for training convolutional neural networks that output the probability of these observations given the available frontal and lateral radiographs. On a validation set of 200 chest radiographic studies that were manually annotated by 3 board-certified radiologists, we find that different uncertainty approaches are useful for different pathologies. We then evaluate our best model on a test set composed of 500 chest radiographic studies annotated by a consensus of 5 board-certified radiologists, and compare the performance of our model to that of 3 additional radiologists in the detection of 5 selected pathologies. On Cardiomegaly, Edema, and Pleural Effusion, the model ROC and PR curves lie above all 3 radiologist operating points. We release the dataset to the public as a standard benchmark to evaluate performance of chest radiograph interpretation models.
Measures of human movement dynamics can predict outcomes like injury risk or musculoskeletal disease progression. However, these measures are rarely quantified in clinical practice due to the prohibitive cost, time, and expertise required. Here we present and validate OpenCap, an open-source platform for computing movement dynamics using videos captured from smartphones. OpenCap's web application enables users to collect synchronous videos and visualize movement data that is automatically processed in the cloud, thereby eliminating the need for specialized hardware, software, and expertise. We show that OpenCap accurately predicts dynamic measures, like muscle activations, joint loads, and joint moments, which can be used to screen for disease risk, evaluate intervention efficacy, assess between-group movement differences, and inform rehabilitation decisions. Additionally, we demonstrate OpenCap's practical utility through a 100-subject field study, where a clinician using OpenCap estimated movement dynamics 25 times faster than a laboratory-based approach at less than 1% of the cost. By democratizing access to human movement analysis, OpenCap can accelerate the incorporation of biomechanical metrics into large-scale research studies, clinical trials, and clinical practice.
Background and study aims: We evaluated the use of an artificial intelligence (AI)-assisted image classifier in determining the feasibility of curative endoscopic resection of large colonic lesions based on non-magnified endoscopic images. Methods: The AI image classifier was trained on 8,000 endoscopic images of large (≥ 2 cm) colonic lesions. The independent validation set consisted of 567 endoscopic images from 76 colonic lesions. Histology of the resected specimens was used as the gold standard. Curative endoscopic resection was defined as histology no more advanced than well-differentiated adenocarcinoma, with ≤ 1 mm submucosal invasion and no lymphovascular invasion; non-curative resection was defined as any lesion not meeting these requirements. Performance of the trained AI image classifier was compared with that of endoscopists. Results: In predicting curative endoscopic resection, AI had an overall accuracy of 85.5 %. Images from narrow-band imaging (NBI) had significantly higher accuracy (94.3 % vs. 76.0 %; P < 0.00001) and area under the ROC curve (AUROC) (0.934 vs. 0.758; P = 0.002) than images from white-light imaging (WLI). AI was superior to two junior endoscopists in accuracy (85.5 % vs. 61.9 % or 82.0 %; P < 0.05), AUROC (0.837 vs. 0.638 or 0.717; P < 0.05), and confidence level (90.1 % vs. 83.7 % or 78.3 %; P < 0.05). However, there was no statistically significant difference in accuracy or AUROC between AI and a senior endoscopist. Conclusions: The trained AI image classifier based on non-magnified images can accurately predict the probability of curative resection of large colonic lesions and outperforms junior endoscopists. NBI images yield better accuracy than WLI images for AI prediction.
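The two metrics reported throughout this abstract, accuracy and AUROC, are computed from the classifier's predicted probabilities against the histology gold standard. A minimal sketch of that computation follows; the scores and labels below are synthetic examples, not the study's data.

```python
# Hedged sketch: computing accuracy and AUROC for a binary
# "curative vs. non-curative resection" classifier.
# y_true and y_prob are synthetic illustration values.
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = [1, 1, 0, 1, 0, 0, 1, 0]                       # 1 = curative (per histology)
y_prob = [0.9, 0.8, 0.3, 0.45, 0.5, 0.2, 0.7, 0.35]    # model probabilities

# Accuracy requires a hard decision; 0.5 is an assumed threshold.
y_pred = [int(p >= 0.5) for p in y_prob]
acc = accuracy_score(y_true, y_pred)

# AUROC is threshold-free: it ranks probabilities across all cutoffs.
auroc = roc_auc_score(y_true, y_prob)
```

Note that accuracy depends on the chosen decision threshold, while AUROC summarizes ranking quality over all thresholds, which is why the study reports both.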