Background and purpose: Acute stroke caused by large vessel occlusions (LVOs) requires emergent detection and treatment by endovascular thrombectomy. However, radiologic LVO detection and treatment are subject to variable delays and human expertise, resulting in morbidity. Imaging software using artificial intelligence (AI) and machine learning (ML), a branch of AI, may improve rapid frontline detection of LVO strokes. This report systematically reviews AI in acute LVO stroke identification and triage, and characterizes LVO detection software.

Methods: A systematic review of acute stroke diagnostic-focused AI studies published from January 2014 to February 2019 in PubMed, Medline, and Embase was performed using the terms 'artificial intelligence' or 'machine learning or deep learning' and 'ischemic stroke' or 'large vessel occlusion'.

Results: Variations of AI, including the ML methods of random forest learning (RFL) and convolutional neural networks (CNNs), are used to detect LVO strokes. Twenty studies using ML were identified. Alberta Stroke Program Early CT Score (ASPECTS) assessment commonly used RFL, while LVO detection typically used CNNs. Image feature detection had greater sensitivity with CNNs than with RFL (85% versus 68%). However, AI algorithm performance metrics use different standards, precluding ideal objective comparison. Four current software platforms incorporate ML: Brainomix (greatest validation of AI for ASPECTS; uses CNNs to automatically detect LVOs), General Electric, iSchemaView (largest number of perfusion study validations for thrombectomy), and Viz.ai (uses CNNs to automatically detect LVOs, then automatically activates emergency stroke treatment systems).

Conclusions: AI may improve LVO stroke detection and the rapid triage necessary for expedited treatment. Standardization of performance assessment is needed in future studies.
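The sensitivity figures the review compares (85% for CNNs versus 68% for RFL) come from standard binary-classification metrics. A minimal sketch, assuming only 0/1 labels for "LVO present", of how sensitivity and specificity are computed from predictions:

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity and specificity for binary LVO-detection labels.

    y_true / y_pred are 1 for LVO present, 0 for absent. Sensitivity
    is the fraction of true LVOs flagged; specificity the fraction of
    non-LVO cases correctly cleared.
    """
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)    # true positives
    fn = np.sum(y_true & ~y_pred)   # missed LVOs
    tn = np.sum(~y_true & ~y_pred)  # true negatives
    fp = np.sum(~y_true & y_pred)   # false alarms
    return tp / (tp + fn), tn / (tn + fp)
```

Because different studies report different metrics (sensitivity, AUC, accuracy) against different reference standards, such numbers are not directly comparable across platforms, which is the standardization gap the review highlights.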
In this paper, we present a new deep learning framework for 3-D tomographic reconstruction. To this end, we map filtered back-projection-type algorithms to neural networks. However, the back-projection cannot be implemented as a fully connected layer due to its memory requirements. To overcome this problem, we propose a new type of cone-beam back-projection layer that efficiently calculates the forward pass. We derive this layer's backward pass as a projection operation. Unlike most deep learning approaches for reconstruction, our new layer permits joint optimization of correction steps in the volume and projection domains. Evaluation is performed numerically on a public data set in a limited-angle setting, showing a consistent improvement over analytical algorithms while keeping the same computational test-time complexity by design. In the region of interest, the peak signal-to-noise ratio increased by 23%. In addition, we show that the learned algorithm can be interpreted using known concepts from cone-beam reconstruction: the network is able to automatically learn strategies such as compensation weights and apodization windows.
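The core idea of a back-projection layer is that each detector reading is smeared back along its ray into the volume, a memory-light operation compared with a fully connected layer. A toy 2-D parallel-beam stand-in (the paper's actual layer is cone-beam and differentiable; this NumPy sketch shows only the forward-pass geometry):

```python
import numpy as np

def backproject(sinogram, angles, size):
    """Unfiltered parallel-beam back-projection onto a size x size grid.

    sinogram: (n_angles, n_detectors) array of line integrals.
    Each detector value is smeared back along its ray direction and
    the contributions over all angles are averaged.
    """
    recon = np.zeros((size, size))
    xs = np.arange(size) - (size - 1) / 2.0  # pixel centres, origin mid-image
    X, Y = np.meshgrid(xs, xs)
    n_det = sinogram.shape[1]
    det_centre = (n_det - 1) / 2.0
    for proj, theta in zip(sinogram, angles):
        # detector bin hit by the ray through each pixel at angle theta
        t = X * np.cos(theta) + Y * np.sin(theta) + det_centre
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        recon += proj[idx]
    return recon / len(angles)
```

Implemented as a neural-network layer, this matrix-free formulation avoids materializing the (voxels x detector-pixels) weight matrix that a fully connected layer would require, which is the memory problem the abstract refers to.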
Machine learning-based approaches outperform competing methods in most disciplines relevant to diagnostic radiology. Interventional radiology, however, has not yet benefited substantially from the advent of deep learning, for two reasons in particular: 1) most images acquired during the procedure are never archived and are thus not available for learning, and 2) even if they were available, annotation would be a severe challenge due to the vast amounts of data. When considering fluoroscopy-guided procedures, an interesting alternative to true interventional fluoroscopy is in silico simulation of the procedure from 3D diagnostic CT. In this case, labeling is comparably easy and potentially readily available, yet the appropriateness of the resulting synthetic data depends on the forward model. In this work, we propose DeepDRR, a framework for fast and realistic simulation of fluoroscopy and digital radiography from CT scans, tightly integrated with the software platforms native to deep learning. We use machine learning for material decomposition and scatter estimation in 3D and 2D, respectively, combined with analytic forward projection and noise injection to achieve the required performance. On the example of anatomical landmark detection in X-ray images of the pelvis, we demonstrate that machine learning models trained on DeepDRRs generalize to unseen clinically acquired data without the need for re-training or domain adaptation. Our results are promising and promote the establishment of machine learning in fluoroscopy-guided procedures.
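At its simplest, a digitally reconstructed radiograph (DRR) is an analytic forward projection: CT numbers are converted to attenuation coefficients, integrated along rays, and mapped to detector intensity via the Beer-Lambert law. A minimal parallel-ray sketch of that baseline (DeepDRR itself adds learned material decomposition, scatter estimation, and noise injection on top, and uses realistic cone-beam geometry):

```python
import numpy as np

def simple_drr(volume_hu, axis=0, mu_water=0.02):
    """Toy digitally reconstructed radiograph via parallel ray sums.

    volume_hu: 3D array of CT numbers in Hounsfield units.
    HU are converted to linear attenuation (mm^-1, assuming the
    illustrative value mu_water = 0.02), integrated along 'axis',
    and mapped to intensity with the Beer-Lambert law I = exp(-path).
    """
    mu = mu_water * (1.0 + volume_hu / 1000.0)  # HU -> attenuation
    mu = np.clip(mu, 0.0, None)                 # air/padding -> 0
    path = mu.sum(axis=axis)                    # line integrals
    return np.exp(-path)                        # detector intensity
```

The abstract's point is that the realism of the forward model determines whether networks trained on such synthetic images transfer to clinical data; a naive ray-sum like this one omits scatter, beam hardening, and noise, which is exactly the gap DeepDRR closes.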
X-ray image guidance enables percutaneous alternatives to complex procedures. Unfortunately, the indirect view onto the anatomy in addition to projective simplification substantially increase the taskload for the surgeon. Additional 3D information such as knowledge of anatomical landmarks can benefit surgical decision making in complicated scenarios. Automatic detection of these landmarks in transmission imaging is challenging since image-domain features characteristic to a certain landmark change substantially depending on the viewing direction. Consequently and to the best of our knowledge, the above problem has not yet been addressed. In this work, we present a method to automatically detect anatomical landmarks in X-ray images independent of the viewing direction. To this end, a sequential prediction framework based on convolutional layers is trained on synthetically generated data of the pelvic anatomy to predict 23 landmarks in single X-ray images. View independence is contingent on training conditions and, here, is achieved on a spherical segment covering 120°×90° in LAO/RAO and CRAN/CAUD, respectively, centered around AP. On synthetic data, the proposed approach achieves a mean prediction error of 5.6 ± 4.5 mm. We demonstrate that the proposed network is immediately applicable to clinically acquired data of the pelvis. In particular, we show that our intra-operative landmark detection together with pre-operative CT enables X-ray pose estimation which, ultimately, benefits initialization of image-based 2D/3D registration.
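Sequential-prediction landmark networks of this kind typically emit one belief map (heatmap) per landmark, and the final coordinates are read off at each map's maximum. A minimal sketch of that decoding step, assuming heatmaps of shape (n_landmarks, H, W) as is conventional for such architectures:

```python
import numpy as np

def landmarks_from_heatmaps(heatmaps):
    """Extract (x, y) landmark coordinates from per-landmark heatmaps.

    heatmaps: array of shape (n_landmarks, H, W), one belief map per
    landmark. Each landmark is placed at its channel's maximum.
    Returns an (n_landmarks, 2) array of (x, y) pixel coordinates.
    """
    n, h, w = heatmaps.shape
    flat = heatmaps.reshape(n, -1).argmax(axis=1)  # per-channel argmax
    ys, xs = np.unravel_index(flat, (h, w))        # back to 2D indices
    return np.stack([xs, ys], axis=1)
```

For the pose-estimation use mentioned in the abstract, the 23 detected 2D points are then matched against their known 3D positions in the pre-operative CT to initialize 2D/3D registration.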