Attributing the pixels of an input image to a certain category is an important and well-studied problem in computer vision, with applications ranging from weakly supervised localisation to understanding hidden effects in the data. In recent years, approaches based on interpreting a previously trained neural network classifier have become the de facto state-of-the-art and are commonly used on medical as well as natural image datasets. In this paper, we discuss a limitation of these approaches which may lead to only a subset of the category-specific features being detected. To address this problem we develop a novel feature attribution technique based on Wasserstein Generative Adversarial Networks (WGAN), which does not suffer from this limitation. We show that our proposed method performs substantially better than the state-of-the-art for visual attribution on a synthetic dataset and on real 3D neuroimaging data from patients with mild cognitive impairment (MCI) and Alzheimer's disease (AD). For AD patients, the method produces compellingly realistic disease effect maps that are very close to the observed effects.
Segmentation of anatomical structures and pathologies is inherently ambiguous. For instance, structure borders may not be clearly visible, or different experts may have different styles of annotating. The majority of current state-of-the-art methods do not account for such ambiguities but rather learn a single mapping from image to segmentation. In this work, we propose a novel method to model the conditional probability distribution of the segmentations given an input image. We derive a hierarchical probabilistic model, in which separate latent variables are responsible for modelling the segmentation at different resolutions. Inference in this model can be efficiently performed using the variational autoencoder framework. We show that our proposed method can be used to generate significantly more realistic and diverse segmentation samples compared to recent related work, both when trained with annotations from a single annotator and when trained with annotations from multiple annotators. The code for this paper is freely available at https://github.com/baumgach/PHiSeg-code.
Algorithms for Magnetic Resonance (MR) image reconstruction from undersampled measurements exploit prior information to compensate for missing k-space data. Deep learning (DL) provides a powerful framework for extracting such information from existing image datasets, through learning, and then using it for reconstruction. Leveraging this, recent methods employed DL to learn mappings from undersampled to fully sampled images using paired datasets of undersampled and corresponding fully sampled images, integrating prior knowledge implicitly. In this article, we propose an alternative approach that learns the probability distribution of fully sampled MR images using unsupervised DL, specifically Variational Autoencoders (VAE), and uses this as an explicit prior term in reconstruction, completely decoupling the encoding operation from the prior. The resulting reconstruction algorithm enjoys a powerful image prior to compensate for missing k-space data without requiring paired datasets for training and without the associated sensitivities, such as deviations in undersampling patterns or coil settings between training and test time. We evaluated the proposed method with T1-weighted images from a publicly available dataset, multicoil complex images acquired from healthy volunteers (N=8) and images with white matter lesions. The proposed algorithm, using the VAE prior, produced visually high-quality reconstructions and achieved low RMSE values, outperforming most of the alternative methods on the same dataset. On multicoil complex data, the algorithm yielded accurate magnitude and phase reconstruction results. In the experiments on images with white matter lesions, the method faithfully reconstructed the lesions.
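The decoupled formulation above can be illustrated with a toy example. The sketch below is a minimal stand-in, not the paper's method: it reconstructs a 1D signal from randomly undersampled Fourier measurements by gradient descent on a data-consistency term plus a regularizer, with a simple quadratic smoothness penalty standing in for the learned VAE log-prior; the mask, weight and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x_true = np.zeros(n)
x_true[20:40] = 1.0                      # toy 1D "image"

mask = rng.random(n) < 0.5               # random undersampling pattern
mask[:4] = mask[-4:] = True              # keep low frequencies (k-space centre)
y = mask * np.fft.fft(x_true)            # undersampled measurements

lam = 0.5                                # prior weight (illustrative)

def objective(x):
    data = 0.5 / n * np.sum(np.abs(mask * np.fft.fft(x) - y) ** 2)
    prior = lam * 0.5 * np.sum((x - np.roll(x, 1)) ** 2)  # stand-in for -log p(x)
    return data + prior

def gradient(x):
    g_data = np.real(np.fft.ifft(mask * (mask * np.fft.fft(x) - y)))
    g_prior = lam * (2 * x - np.roll(x, 1) - np.roll(x, -1))
    return g_data + g_prior

x = np.real(np.fft.ifft(y))              # zero-filled starting point
start = objective(x)
for _ in range(200):                     # gradient descent on the MAP objective
    x -= 0.1 * gradient(x)
```

Because the prior enters only as an additive regularizer, the data-consistency term (and hence the undersampling mask) can change between training and test time without retraining, which is the decoupling the abstract refers to.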
While human experts assessing a medical scan excel at, and rely on, identifying abnormal structures without necessarily specifying their type, current unsupervised abnormality detection methods are far from being practical. Recently proposed deep-learning (DL) based methods were initial attempts at demonstrating the capabilities of this approach. In this work, we propose an outlier detection method that combines image restoration with unsupervised learning based on DL. A normal-anatomy prior is learned by training a Gaussian Mixture Variational Auto-Encoder (GMVAE) on images from healthy individuals. This prior is then used in a Maximum-A-Posteriori (MAP) restoration model to detect outliers. Abnormal lesions, not represented in the prior, are removed from the images during restoration to satisfy the prior, and the difference between the original and restored images forms the method's detection map. We evaluated the proposed method on Magnetic Resonance Images (MRI) of patients with brain tumors and compared it against previous baselines. Experimental results indicate that the method is capable of detecting lesions in the brain and achieves improvement over the current state of the art.
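The restoration-based detection idea can be sketched in miniature. In the toy example below, a quadratic smoothness penalty stands in for the learned GMVAE normal-anatomy prior (an assumption for illustration only): MAP restoration pulls the image towards the prior, which removes the synthetic "lesion", and the difference map localises it.

```python
import numpy as np

n = 64
healthy = np.sin(np.linspace(0.0, 2.0 * np.pi, n))  # stand-in for normal anatomy
img = healthy.copy()
img[30] += 3.0                                      # synthetic "lesion" (outlier)

# MAP restoration: stay close to the observed image while being likely under
# the prior; a smoothness penalty replaces the learned GMVAE prior here.
lam = 5.0
x = img.copy()
for _ in range(500):
    grad = (x - img) + lam * (2 * x - np.roll(x, 1) - np.roll(x, -1))
    x -= 0.02 * grad

anomaly = np.abs(img - x)                           # difference map = detection signal
```

The smooth anatomy is nearly unchanged by restoration, so the residual concentrates at the outlier voxel; in the real method the learned prior plays the same role for full 3D brain MRI.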
Background: Intravoxel incoherent motion (IVIM) imaging of diffusion and perfusion in the heart suffers from high parameter estimation error. The purpose of this work is to improve cardiac IVIM parameter mapping using Bayesian inference.

Methods: A second-order motion-compensated diffusion-weighted spin-echo sequence with navigator-based slice tracking was implemented to collect cardiac IVIM data in early systole in eight healthy subjects on a clinical 1.5 T CMR system. IVIM data were encoded along six gradient-optimized directions with b-values of 0–300 s/mm². Subjects were scanned twice in two scan sessions one week apart to assess intra-subject reproducibility. Bayesian shrinkage prior (BSP) inference was implemented to determine the IVIM parameters (diffusion D, perfusion fraction F and pseudo-diffusion D*). Results were compared to least-squares (LSQ) parameter estimation. Signal-to-noise ratio (SNR) requirements for a given fitting error were assessed for the two methods using simulated data. Reproducibility of in-vivo parameter estimation using BSP and LSQ was analysed.

Results: BSP resulted in reduced SNR requirements when compared to LSQ in simulations. In vivo, BSP analysis yielded IVIM parameter maps with smaller intra-myocardial variability and higher estimation certainty relative to LSQ. Mean IVIM parameter estimates in the eight healthy subjects were (LSQ/BSP): 1.63 ± 0.28/1.51 ± 0.14·10⁻³ mm²/s for D, 13.13 ± 19.81/13.11 ± 5.95% for F and 201.45 ± 313.23/13.11 ± 14.53·10⁻³ mm²/s for D*. Parameter variation across all volunteers and measurements was lower with BSP compared to LSQ (coefficient of variation BSP vs. LSQ: 9% vs. 17% for D, 45% vs. 151% for F and 111% vs. 155% for D*). In addition, reproducibility of the IVIM parameter estimates was higher with BSP compared to LSQ (Bland-Altman coefficients of repeatability BSP vs. LSQ: 0.21 vs. 0.26·10⁻³ mm²/s for D, 5.55 vs. 6.91% for F and 15.06 vs. 422.80·10⁻³ mm²/s for D*).

Conclusion: Robust free-breathing cardiac IVIM data acquisition in early systole is possible with the proposed method. BSP analysis yields improved IVIM parameter maps relative to conventional LSQ fitting, with fewer outliers, improved estimation certainty and higher reproducibility. IVIM parameter mapping holds promise for myocardial perfusion measurements without the need for contrast agents.
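For reference, the signal model underlying both fitting approaches is the biexponential IVIM equation S(b) = S0·(F·e^(−b·D*) + (1−F)·e^(−b·D)). The snippet below simulates noise-free signals over the b-value range used here and recovers D and F with a conventional segmented least-squares fit, a common LSQ baseline; the b-value threshold and parameter values are illustrative assumptions, and the Bayesian shrinkage prior itself is not reproduced.

```python
import numpy as np

def ivim_signal(b, s0, f, d_star, d):
    # Biexponential IVIM model: perfusion (pseudo-diffusion) + diffusion pools.
    return s0 * (f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d))

b = np.array([0.0, 15, 30, 50, 75, 100, 150, 200, 300])    # b-values in s/mm^2
s0, f_true, d_star_true, d_true = 1.0, 0.13, 0.05, 1.5e-3  # illustrative values
signal = ivim_signal(b, s0, f_true, d_star_true, d_true)

# Segmented LSQ fit: at high b-values the perfusion pool has decayed, so
# ln S(b) is approximately linear with slope -D and intercept ln(S0 * (1 - F)).
hi = b >= 150.0
slope, intercept = np.polyfit(b[hi], np.log(signal[hi]), 1)
d_est = -slope                                             # diffusion D, mm^2/s
f_est = 1.0 - np.exp(intercept) / s0                       # perfusion fraction F
```

With noise this simple fit becomes unstable, particularly for D*, which is the estimation-error problem that motivates the Bayesian shrinkage prior in this work.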