Diffusion-weighted magnetic resonance imaging (DWI) is the only noninvasive method for quantifying microstructure and reconstructing white-matter pathways in the living human brain. Fluctuations from multiple sources create significant additive noise in DWI data, which must be suppressed before subsequent microstructure analysis. We introduce a self-supervised learning method for denoising DWI data, Patch2Self (P2S), which uses the entire volume to learn a full-rank locally linear denoiser for that volume. By taking advantage of the oversampled $q$-space of DWI data, P2S can separate structure from noise without requiring an explicit model for either. The setup of P2S, however, can be resource-intensive in both running time and memory usage, as it uses all $n$ voxels from the $d-1$ held-in volumes to learn a linear mapping $\Phi : \mathbb{R}^{n \times (d-1)} \to \mathbb{R}^{n}$ for denoising the held-out volume. We exploit the redundancy imposed by P2S to alleviate its performance issues and to inspect regions that disproportionately influence the noise. Specifically, we introduce P2S-sketch, which makes a two-fold contribution: \textit{(1)} P2S-sketch uses matrix sketching to perform self-supervised denoising; by solving a sub-problem on a smaller subspace, a so-called \textit{coreset}, we show how P2S can yield a significant speedup in training time while using less memory. \textit{(2)} We show how statistical leverage scores can be used to \textit{interpret} the denoising of dMRI data, a process that was traditionally treated as a black box. Our experiments on simulated and real data clearly demonstrate that P2S via matrix sketching (P2S-sketch) incurs no loss in denoising quality, while yielding a significant speedup and improved memory usage by training on a smaller fraction of the data. With thorough comparisons on real and simulated data, we show that Patch2Self outperforms the current state-of-the-art methods for DWI denoising in terms of both visual conspicuity and downstream modeling tasks. We demonstrate the effectiveness of our approach via multiple quantitative metrics, such as fiber bundle coherence, $R^2$ via cross-validation on model fitting, and the mean absolute error of DTI residuals across a cohort of sixty subjects.
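To make the regression step concrete, here is a minimal numpy sketch of the P2S-style self-supervised fit, assuming the DWI data has been flattened to an $(n, d)$ array of voxels by volumes. It works at the voxel level (radius-0 patches) for brevity, and the function name and plain least-squares fit are illustrative, not the DIPY implementation.

```python
import numpy as np

def patch2self_denoise(X):
    """Minimal Patch2Self-style denoiser on a flattened DWI array.

    X : (n, d) array of n voxels by d diffusion-weighted volumes.
    For each held-out volume j, fit a linear map from the other d-1
    volumes to volume j; the prediction is the denoised signal, so the
    fit never sees the noise of the volume it is predicting.
    """
    n, d = X.shape
    denoised = np.empty_like(X, dtype=float)
    for j in range(d):
        held_in = np.delete(X, j, axis=1)            # (n, d-1) regressors
        # least-squares fit of Phi for the held-out volume j
        coef, *_ = np.linalg.lstsq(held_in, X[:, j], rcond=None)
        denoised[:, j] = held_in @ coef              # self-supervised prediction
    return denoised
```

Because each held-out volume is predicted only from the other volumes, the fit cannot reproduce noise that is independent across $q$-space, which is the property the self-supervised setup relies on.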
Interior point methods (IPMs) are a common approach for solving linear programs (LPs), with strong theoretical guarantees and solid empirical performance. The time complexity of these methods is dominated by the cost of solving a linear system of equations at each iteration. In common applications of linear programming, particularly in machine learning and scientific computing, the size of this linear system can become prohibitively large, requiring the use of iterative solvers, which provide an approximate solution to the linear system. However, approximately solving the linear system at each iteration of an IPM invalidates the theoretical guarantees of common IPM analyses. To remedy this, we theoretically and empirically analyze (slightly modified) predictor-corrector IPMs when using approximate linear solvers: our approach guarantees that, when certain conditions are satisfied, the number of IPM iterations does not increase and the final solution remains feasible. We also provide practical instantiations of approximate linear solvers that satisfy these conditions for special classes of constraint matrices, using randomized linear algebra.
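As a concrete illustration of the inner step this analysis concerns, the following sketch approximately solves the normal-equations system $A D^2 A^\top \Delta y = r$ that arises at each IPM iteration using CG, and accepts the step only when the residual is small relative to the duality measure $\mu$. The specific acceptance rule `resid <= eta * mu` is a hypothetical stand-in for the paper's conditions.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def inexact_normal_eq_step(A, d, r, mu, eta=0.5):
    """Approximately solve (A D^2 A^T) dy = r, as inside one IPM iteration.

    A  : (m, n) constraint matrix
    d  : (n,) positive scaling from the current iterate (D = diag(d))
    r  : (m,) predictor or corrector right-hand side
    mu : current duality measure; the tolerance below ties the solver
         accuracy to mu (an illustrative rule, not the paper's exact one)
    """
    m, _ = A.shape

    def matvec(y):                    # matrix-free K y = A D^2 A^T y
        return A @ (d**2 * (A.T @ y))

    K = LinearOperator((m, m), matvec=matvec)
    dy, _ = cg(K, r)                  # approximate solve
    resid = np.linalg.norm(r - matvec(dy))
    accepted = resid <= eta * mu      # accept the step only if accurate enough
    return dy, resid, accepted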
Diffusion MRI typically has a low SNR because noise from a variety of sources corrupts the data. The state-of-the-art denoiser Patch2Self proposed a self-supervised learning technique that uses patches from all the voxels to learn the denoising function, which in practice can be resource-intensive. We therefore propose Patch2Self2, which performs self-supervised denoising using coresets constructed via matrix sketching, resulting in significant speedups and reduced memory usage. Our results show that sampling-based sketching via leverage scores gives the best performance. Remarkably, leverage scores can also be used directly as a statistic for identifying influential regions that hamper denoising performance.
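A minimal sketch of the coreset construction via leverage-score sampling, assuming the $(n, d-1)$ regressor matrix from the Patch2Self step. Exact scores are computed from a thin QR for clarity; at scale, fast approximate leverage scores would typically be used instead.

```python
import numpy as np

def leverage_score_coreset(Y, s, rng=None):
    """Sample s rows of Y (n >> d) proportionally to their leverage scores.

    Exact scores from a thin QR: row i's leverage is ||Q[i, :]||^2.
    Returns the sampled indices, the rescaled coreset rows, and the
    scores themselves, which double as a per-voxel influence statistic.
    """
    rng = np.random.default_rng(rng)
    Q, _ = np.linalg.qr(Y)                     # thin QR, Q is (n, d)
    scores = np.sum(Q**2, axis=1)              # leverage of each voxel/row
    probs = scores / scores.sum()
    idx = rng.choice(Y.shape[0], size=s, replace=True, p=probs)
    # rescale rows so the sketched least-squares solution is unbiased
    Y_core = Y[idx] / np.sqrt(s * probs[idx])[:, None]
    return idx, Y_core, scores
```

Reshaped back to voxel coordinates, the same `scores` vector yields the interpretability statistic described above: high-leverage voxels mark regions that disproportionately influence the fit.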
Linear programming (LP) is an extremely useful tool and has been successfully applied to solve various problems in a wide range of areas, including operations research, engineering, economics, and even more abstract mathematical areas such as combinatorics. It is also used in many machine learning applications, such as $\ell_1$-regularized SVMs, basis pursuit, nonnegative matrix factorization, etc. Interior Point Methods (IPMs) are one of the most popular methods for solving LPs, both in theory and in practice. Their underlying complexity is dominated by the cost of solving a system of linear equations at each iteration. In this paper, we consider infeasible IPMs for the special case where the number of variables is much larger than the number of constraints. Using tools from Randomized Linear Algebra, we present a preconditioning technique that, when combined with the Conjugate Gradient iterative solver, provably guarantees that infeasible IPM algorithms (suitably modified to account for the error incurred by the approximate solver) converge to a feasible, approximately optimal solution, without increasing their iteration complexity. Our empirical evaluations verify our theoretical results on both real-world and synthetic data.
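One plausible instantiation of the preconditioning idea in the $n \gg m$ regime: sketch the scaled constraint matrix down to a small number of columns, factor the sketch, and hand the factor to CG as a preconditioner. A dense Gaussian sketch is used here purely for simplicity; the paper's construction (e.g. a structured transform) may differ.

```python
import numpy as np
from scipy.linalg import solve_triangular
from scipy.sparse.linalg import cg, LinearOperator

def sketch_preconditioned_cg(A, d, r, sketch_factor=4, rng=None):
    """Solve (A D^2 A^T) dy = r with sketch-preconditioned CG.

    A is (m, n) with n >> m and D = diag(d). We sketch B = A D down to
    w = sketch_factor * m columns, so that (B S)(B S)^T = R^T R
    approximates A D^2 A^T, and use (R^T R)^{-1} as the preconditioner.
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    w = sketch_factor * m
    B = A * d                                  # A @ diag(d), shape (m, n)
    S = rng.standard_normal((n, w)) / np.sqrt(w)
    _, R = np.linalg.qr((B @ S).T)             # (B S)^T = Q R, R is (m, m)

    def matvec(y):                             # matrix-free A D^2 A^T y
        return A @ (d**2 * (A.T @ y))

    def apply_precond(y):                      # (R^T R)^{-1} y
        z = solve_triangular(R, y, trans='T')  # solve R^T z = y
        return solve_triangular(R, z)          # solve R x = z

    K = LinearOperator((m, m), matvec=matvec)
    M = LinearOperator((m, m), matvec=apply_precond)
    dy, info = cg(K, r, M=M)                   # info == 0 on convergence
    return dy, info
```

Since $(BS)(BS)^\top = R^\top R$ approximates $A D^2 A^\top$ when the sketch is large enough, the preconditioned system is well-conditioned and CG converges in few iterations, which is the mechanism behind the unchanged iteration complexity claimed above.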