The removal of embedded blast-generated fragments from soft tissue is difficult, especially in the head and neck region. First, many retained foreign materials are non-metallic and therefore cannot be detected by fluoroscopy; second, broad exploration of the soft tissue is not possible in the facial area for functional and cosmetic reasons. Intraoperative navigation (computer-assisted surgery, CAS) may facilitate the retrieval of foreign bodies and reduce exploration trauma. In a blind trial, five test specimens of different materials (glass, metal, wood, plastic, and stone) were inserted through an intraoral incision on the left and right sides of the head and neck of ten body donors. A second physician then detected and removed the foreign bodies from one side of the body without navigation and from the other side with navigation. We measured the duration of surgery, the extent of tissue trauma caused during surgery, the time required to remove the foreign bodies, and the subjective usefulness of navigation. With the navigation system, the foreign bodies were detected after an average of 26.7 (±35.1) s (p < 0.0001) and removed after an average of 79.1 (±66.2) s (p = 0.0239), with an average incision length of 10.0 (±3.5) mm. Without the navigation system, the foreign bodies were located after an average of 86.5 (±77.7) s and removed after an average of 74.1 (±45.9) s, with an average incision length of 13.0 (±3.6) mm (p = 0.0007). Intraoperative navigation systems are a valuable tool for removing foreign bodies from the soft tissue of the face and neck: both the duration of surgery and the incision length can be reduced. However, detection reliability varies with the material of the foreign body and its signal intensity on CT/MRI. Overall, navigation is considered a useful tool.
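The study compares paired measurements (with vs. without navigation) within the same donors. As a minimal sketch of how such a paired comparison of detection times could be evaluated, the snippet below runs a paired t-test; the variable names and all numbers are illustrative placeholders, not the study's data or analysis code.

```python
# Minimal sketch (not the authors' analysis): paired comparison of detection
# times with vs. without navigation, one pair per body donor.
# All values below are illustrative placeholders, not the study data.
import numpy as np
from scipy import stats

# Hypothetical per-donor detection times in seconds (10 donors).
with_nav = np.array([12.0, 30.5, 8.2, 55.0, 20.1, 15.3, 41.0, 9.8, 33.7, 25.4])
without_nav = np.array([60.2, 95.4, 40.1, 210.0, 70.5, 88.3, 120.7, 35.9, 150.2, 64.0])

# Paired t-test: each donor serves as their own control (navigated vs. non-navigated side).
t_stat, p_value = stats.ttest_rel(with_nav, without_nav)
print(f"mean with navigation:    {with_nav.mean():.1f} s (+/- {with_nav.std(ddof=1):.1f})")
print(f"mean without navigation: {without_nav.mean():.1f} s (+/- {without_nav.std(ddof=1):.1f})")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```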
3D imaging enables more accurate diagnosis by providing spatial information about organ anatomy. However, using 3D images to train AI models is computationally challenging because they contain tens or hundreds of times more pixels than their 2D counterparts. To train on high-resolution 3D images, convolutional neural networks typically resort to downsampling them or projecting them to two dimensions. In this work, we propose an effective alternative: a novel neural network architecture, the 3D Globally-Aware Multiple Instance Classifier (3D-GMIC), that enables computationally efficient classification of 3D medical images at full resolution. Compared to off-the-shelf convolutional neural networks, 3D-GMIC uses 77.98%-90.05% less GPU memory and 91.23%-96.02% less computation. Although the network is trained only with image-level labels, without segmentation labels, it explains its classification predictions by providing pixel-level saliency maps. On a dataset collected at NYU Langone Health, comprising 85,526 patients with full-field 2D mammography (FFDM), synthetic 2D mammography, and 3D mammography (digital breast tomosynthesis, DBT), 3D-GMIC achieves a breast-wise AUC of 0.831 (95% CI: 0.769-0.887) in classifying breasts with malignant findings using DBT images. Because DBT and 2D mammography capture different information, averaging predictions on 2D and 3D mammography yields a diverse ensemble with an improved breast-wise AUC of 0.841 (95% CI: 0.768-0.895). Our model generalizes well to an external dataset from Duke University Hospital, achieving an image-wise AUC of 0.848 (95% CI: 0.798-0.896) in classifying DBT images with malignant findings.
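The ensembling step described above is a simple average of per-breast predictions from the 2D and 3D models. The sketch below illustrates that idea and a breast-wise AUC computation on synthetic data; the variable names (`probs_2d`, `probs_3d`) are assumptions for illustration, not the authors' code.

```python
# Minimal sketch (assumed names, synthetic data): average per-breast malignancy
# probabilities from a 2D mammography model and a 3D DBT model, then compare
# breast-wise AUC of the single model vs. the averaged ensemble.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical per-breast data: binary labels and model output probabilities.
labels = rng.integers(0, 2, size=500)                                # 1 = malignant finding
probs_2d = np.clip(labels * 0.3 + rng.normal(0.4, 0.2, 500), 0, 1)   # 2D model output
probs_3d = np.clip(labels * 0.3 + rng.normal(0.4, 0.2, 500), 0, 1)   # 3D model output

# Because 2D and 3D mammography capture different information, averaging the
# two predictions acts as a diverse ensemble.
probs_ensemble = (probs_2d + probs_3d) / 2.0

print("AUC, 3D only: ", round(roc_auc_score(labels, probs_3d), 3))
print("AUC, ensemble:", round(roc_auc_score(labels, probs_ensemble), 3))
```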
This study aimed to evaluate the effect of an artificial intelligence (AI) support system on breast ultrasound diagnostic accuracy. In this Health Insurance Portability and Accountability Act–compliant, institutional review board–approved retrospective study, 200 lesions (155 benign, 45 malignant) were randomly selected from consecutive ultrasound-guided biopsies (June 2017–January 2019). Two readers, blinded to clinical history and pathology, evaluated lesions with and without a Food and Drug Administration–approved AI software. Lesion features, Breast Imaging Reporting and Data System (BI-RADS) rating (1–5), reader confidence level (1–5), and AI BI-RADS equivalent (1–5) were recorded. Statistical analysis was performed for diagnostic accuracy, negative predictive value, positive predictive value (PPV), sensitivity, and specificity of reader versus AI BI-RADS. Generalized estimating equation analysis was used for reader versus AI accuracy regarding lesion features and for AI impact on low-confidence score lesions. The effect of AI on the false-positive biopsy rate was also determined. Statistical tests were conducted at a 2-sided 5% significance level. There was no significant difference between AI and pooled reader assessment in accuracy (73.0% vs 69.8%), negative predictive value (100% vs 98.5%), PPV (45.5% vs 42.4%), sensitivity (100% vs 96.7%), or specificity (65.2% vs 61.9%; P = 0.118–0.409). AI was more accurate than readers for irregular-shaped lesions (74.1% vs 57.4%, P = 0.002) and less accurate for round-shaped lesions (26.5% vs 50.0%, P = 0.049). AI improved diagnostic accuracy for reader-rated low-confidence lesions, with increased PPV (24.7% vs 19.3%, P = 0.004) and specificity (57.8% vs 44.6%, P = 0.008). An AI decision support aid may help improve sonographic diagnostic accuracy, particularly in cases with low reader confidence, thereby decreasing false positives.
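The metrics compared above (accuracy, sensitivity, specificity, PPV, NPV) all derive from a 2x2 confusion matrix of pathology-confirmed labels against dichotomized BI-RADS ratings. The sketch below shows that arithmetic; the BI-RADS >= 4 cutoff and all data values are illustrative assumptions, not the study's analysis.

```python
# Minimal sketch (illustrative, not the study's code): diagnostic accuracy
# metrics from pathology ground truth and BI-RADS ratings, dichotomized at
# BI-RADS >= 4 (biopsy recommended). Data are made up.
import numpy as np

def diagnostic_metrics(truth_malignant, birads, threshold=4):
    """Return accuracy, sensitivity, specificity, PPV, and NPV."""
    pred_positive = birads >= threshold
    tp = np.sum(pred_positive & truth_malignant)
    tn = np.sum(~pred_positive & ~truth_malignant)
    fp = np.sum(pred_positive & ~truth_malignant)
    fn = np.sum(~pred_positive & truth_malignant)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# Hypothetical example: 10 lesions, pathology ground truth and reader BI-RADS.
truth = np.array([1, 0, 0, 1, 0, 0, 1, 0, 0, 0], dtype=bool)
reader_birads = np.array([5, 4, 3, 4, 2, 4, 5, 3, 2, 3])
print(diagnostic_metrics(truth, reader_birads))
```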