In this paper, we propose to train deep neural networks with biomechanical simulations to predict the prostate motion encountered during ultrasound-guided interventions. In this application, unstructured points are sampled from segmented pre-operative MR images to represent the anatomical regions of interest. The point sets are then assigned point-specific material properties and displacement loads, forming the unordered input feature vectors. An adapted PointNet can be trained to predict the nodal displacements, using finite element (FE) simulations as ground-truth data. Furthermore, a versatile bootstrap aggregating mechanism, comprising training-time bootstrap sampling and model averaging at inference, is validated to accommodate the variable number of feature vectors arising from different patient geometries. This results in a fast and accurate approximation to the FE solutions without requiring subject-specific solid meshing. Based on 160,000 nonlinear FE simulations on clinical imaging data from 320 patients, we demonstrate that the trained networks generalise to unstructured point sets sampled directly from holdout patient segmentations, yielding near real-time inference and an expected error of 0.017 mm in predicted nodal displacement.
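The bootstrap aggregating mechanism described above can be sketched as follows. This is a minimal numpy illustration, not the paper's actual network: `toy_model` is a hypothetical stand-in for the trained point network, and the feature layout, subset size and number of draws are illustrative assumptions. A variable-sized point set is repeatedly resampled to a fixed size (with replacement), and each node's predicted displacement is averaged over the draws in which it appears.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_sample(features, n_points):
    """Resample a variable-sized point set to a fixed size (with replacement)."""
    idx = rng.integers(0, len(features), size=n_points)
    return features[idx], idx

def toy_model(features):
    """Hypothetical stand-in for the trained point network: a fixed linear map
    from per-point features to a 3-D nodal displacement."""
    W = np.full((features.shape[1], 3), 0.1)
    return features @ W

def bagged_inference(features, n_points=64, n_draws=10):
    """Model-averaging inference: accumulate predictions over repeated
    bootstrap draws and average per node."""
    sums = np.zeros((len(features), 3))
    counts = np.zeros(len(features))
    for _ in range(n_draws):
        sample, idx = bootstrap_sample(features, n_points)
        np.add.at(sums, idx, toy_model(sample))
        np.add.at(counts, idx, 1)
    counts = np.maximum(counts, 1)  # guard nodes never drawn
    return sums / counts[:, None]

points = rng.normal(size=(100, 6))  # e.g. xyz + material + load features
disp = bagged_inference(points)
print(disp.shape)  # (100, 3)
```

Because the resampling is with replacement to a fixed size, the same network input shape serves every patient geometry, which is what removes the need for subject-specific meshing at inference time.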
Purpose Minimally invasive treatments for renal carcinoma offer a low rate of complications and quick recovery. One drawback of the use of computed tomography (CT) for needle guidance is the use of iodinated contrast agents, which require an increased X-ray dose and can potentially cause adverse reactions. The purpose of this work is to generalise the problem of synthetic contrast enhancement to allow the generation of multiple phases on non-contrast CT data from a real-world, clinical dataset without training multiple convolutional neural networks. Methods A framework for switching between contrast phases by conditioning the network on the phase information is proposed and compared with separately trained networks. We then examine how the degree of supervision affects the generated contrast by evaluating three established architectures: U-Net (fully supervised), Pix2Pix (adversarial with supervision), and CycleGAN (fully adversarial). Results We demonstrate that there is no performance loss when testing the proposed method against separately trained networks. Of the training paradigms investigated, the fully adversarial CycleGAN performs the worst, while the fully supervised U-Net generates more realistic voxel intensities and performs better than Pix2Pix when the generated contrast images are used in a downstream segmentation task. Lastly, two models are shown to generalise to intra-procedural data not seen during the training process, also enhancing features such as needles and ice balls relevant to interventional radiological procedures. Conclusion The proposed contrast switching framework is a feasible option for generating multiple contrast phases without the overhead of training multiple neural networks, while also being robust towards unseen data and enhancing contrast in features relevant to clinical practice.
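One common way to condition a single network on the target phase, as the framework above describes, is to append a one-hot phase code as extra input channels. The sketch below is a minimal numpy illustration of that scheme; the phase labels and channel layout are assumptions for demonstration, and the paper's exact conditioning mechanism may differ.

```python
import numpy as np

PHASES = ["arterial", "venous", "delayed"]  # hypothetical phase labels

def add_phase_channels(volume, phase):
    """Condition a single-channel CT volume on the target contrast phase by
    appending constant one-hot phase maps as extra input channels."""
    onehot = np.zeros(len(PHASES))
    onehot[PHASES.index(phase)] = 1.0
    # Broadcast each phase bit to a full-size spatial map.
    maps = np.broadcast_to(onehot[:, None, None, None],
                           (len(PHASES),) + volume.shape)
    return np.concatenate([volume[None], maps], axis=0)

vol = np.zeros((8, 8, 8))            # placeholder non-contrast volume
x = add_phase_channels(vol, "venous")
print(x.shape)  # (4, 8, 8, 8): image channel + 3 phase channels
```

With this input layout, one set of network weights serves all phases, so switching the requested phase at test time is just a change of the conditioning channels rather than a change of model.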
Introduction Chronic liver disease is a growing cause of morbidity and mortality in the UK. Acute presentation with advanced disease is common, and prioritisation of resources to those at highest risk at earlier disease stages is essential to improving patient outcomes. Existing prognostic tools are of limited accuracy, and to date no imaging-based tools are used in clinical practice, despite multiple anatomical imaging features that worsen with disease severity. In this paper, we outline our scoping review protocol, which aims to provide an overview of existing prognostic factors and models that link anatomical imaging features with clinical endpoints in chronic liver disease. This will summarise the number, type and methods used by existing imaging feature-based prognostic studies and indicate whether there are sufficient studies to justify future systematic reviews. Methods and analysis The protocol was developed in accordance with existing scoping review guidelines. Searches of MEDLINE and Embase will be conducted on OvidSP using titles, abstracts and Medical Subject Headings, restricted to publications after 1980 to ensure imaging method relevance. Initial screening will be undertaken by two independent reviewers. Full-text data extraction will be undertaken by three pretrained reviewers who have participated in a group data extraction session to ensure reviewer consensus and reduce inter-rater variability. Where needed, data extraction queries will be resolved by reviewer team discussion. Reporting of results will be based on grouping of related factors and their cumulative frequencies. Prognostic anatomical imaging features and clinical endpoints will be reported using descriptive statistics to summarise the number of studies, study characteristics and the statistical methods used. Ethics and dissemination Ethical approval is not required as this study is based on previously published work.
Findings will be disseminated by peer-reviewed publication and/or conference presentations.