Purpose: Since the advent of neural networks, artificial intelligence in computer vision has been increasingly adopted in clinical applications, potentially providing incremental information beyond the mere detection of pathology. Because their algorithmic approach propagates input variation, neural networks could be used to identify and evaluate relevant image features. In this study, we introduce a basic dataset structure and demonstrate a pertinent use case.

Methods: A multidimensional classification of ankle x-rays (n = 1493), rating a variety of features including fracture certainty, was used to confirm its usability for separating input variations. We trained a customized neural network on the task of fracture detection using a state-of-the-art preprocessing and training protocol. By grouping the radiographs into subsets according to their image features, we evaluated the influence of selected features on model performance via selective training.

Results: The models trained on our dataset outperformed most comparable models in the current literature, with an ROC AUC of 0.943. Excluding ankle x-rays with signs of surgery improved fracture classification performance (AUC 0.955), while limiting the training set to otherwise healthy ankles with and without fracture had no consistent effect.

Conclusion: Using multiclass datasets and comparing model performance, we were able to demonstrate that signs of surgery act as a confounding factor whose elimination improved our model. In contrast, eliminating pathologies other than fractures had no effect on model performance, suggesting a beneficial influence of feature variability for robust model training. Thus, multiclass datasets allow for the evaluation of distinct image features, deepening our understanding of pathology imaging.
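The selective-training idea above can be sketched in a few lines: group radiographs by their feature annotations, drop a candidate confounder, retrain, and compare ROC AUC. The toy records, feature names, and helper functions below are illustrative assumptions, not the study's actual data or code; the AUC here is computed via the rank-sum (Mann–Whitney U) formulation.

```python
def roc_auc(labels, scores):
    """ROC AUC via the rank-sum formulation: fraction of positive/negative
    score pairs in which the positive example is ranked higher (ties = 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one example of each class")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def filter_subset(records, exclude_feature):
    """Drop radiographs carrying a given feature annotation (e.g. 'surgery')."""
    return [r for r in records if exclude_feature not in r["features"]]

# Toy dataset: each record has a fracture label and a set of feature labels.
records = [
    {"fracture": 1, "features": {"swelling"}},
    {"fracture": 0, "features": {"surgery"}},
    {"fracture": 1, "features": set()},
    {"fracture": 0, "features": set()},
]

subset = filter_subset(records, "surgery")
print(len(subset))                                  # 3
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

In practice one would train a model on each filtered subset and compare the resulting AUCs on a fixed test set, as the study does for the "signs of surgery" feature.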
Objectives: This study evaluated the accuracy of deep neural patchworks (DNPs), a deep learning-based segmentation framework, for the automated identification of 60 cephalometric landmarks (bone, soft tissue, and tooth landmarks) on CT scans. The aim was to determine whether DNPs could be used for routine three-dimensional cephalometric analysis in diagnostics and treatment planning in orthognathic surgery and orthodontics.

Methods: Full-skull CT scans of 30 adult patients (18 female, 12 male; mean age 35.6 years) were randomly divided into a training and a test data set (each n = 15). Clinician A annotated the 60 landmarks in all 30 CT scans; clinician B annotated them in the test data set only. The DNP was trained using spherical segmentations of the tissue adjacent to each landmark. Automated landmark predictions in the separate test data set were created by calculating the center of mass of each prediction. The accuracy of the method was evaluated by comparing these automated annotations to the manual annotations.

Results: The DNP was successfully trained to identify all 60 landmarks. The mean error of our method was 1.94 mm (SD 1.45 mm), compared to a mean error of 1.32 mm (SD 1.08 mm) for manual annotations. The smallest errors were found for the landmarks ANS (1.11 mm), SN (1.20 mm), and CP_R (1.25 mm).

Conclusion: The DNP algorithm accurately identified cephalometric landmarks with mean errors below 2 mm. This method could improve the workflow of cephalometric analysis in orthodontics and orthognathic surgery. Low training requirements combined with high precision make this method particularly promising for clinical use.
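The center-of-mass step described in the Methods can be sketched as follows: the network emits a voxel-wise score volume per landmark, and the landmark coordinate is the mean position of the voxels above a threshold. This is a minimal illustration under assumed conventions (nested-list volume in (z, y, x) order, threshold 0.5), not the authors' implementation.

```python
def center_of_mass(volume, threshold=0.5):
    """Reduce a voxel-wise landmark prediction to one (z, y, x) coordinate:
    the unweighted mean position of all voxels whose score exceeds threshold."""
    coords = [
        (z, y, x)
        for z, plane in enumerate(volume)
        for y, row in enumerate(plane)
        for x, v in enumerate(row)
        if v > threshold
    ]
    if not coords:
        raise ValueError("empty prediction: no voxel above threshold")
    n = len(coords)
    return tuple(sum(c[i] for c in coords) / n for i in range(3))

# Toy 3x3x3 score volume with a two-voxel blob near the center.
vol = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]
vol[1][1][1] = 0.9
vol[1][1][2] = 0.8
print(center_of_mass(vol))  # (1.0, 1.0, 1.5)
```

Comparing such predicted coordinates against a clinician's manual annotations (Euclidean distance per landmark) yields the millimeter errors reported in the Results.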