Purpose: In vivo myelin quantification can provide valuable noninvasive information on neuronal maturation and development, as well as insights into neurological disorders. Multiexponential analysis of multiecho T2 relaxation is a powerful and widely applied method for quantification of the myelin water fraction (MWF). In recent literature, the MWF is most commonly estimated using a regularized nonnegative least squares algorithm.

Methods: The orthogonal matching pursuit algorithm is proposed as an alternative method for estimating the MWF. Orthogonal matching pursuit is a greedy sparse reconstruction algorithm with low computational complexity. For validation, both methods are compared against a ground truth, at comparable computation times, using numerical simulations and a phantom model. The numerical simulations were used to quantify the theoretical errors as well as the effects of varying the SNR, the strength of the regularization, and the resolution of the basis set. The phantom model was used to assess the performance of the 2 methods in the presence of errors introduced by the MR measurement. Lastly, 4 healthy subjects were scanned to evaluate in vivo performance.

Results: The results in simulations and phantoms demonstrate that the MWFs determined with orthogonal matching pursuit are 1.7 times more accurate than those obtained with nonnegative least squares, with comparable precision. The remaining bias of the MWF is shown to be related to the regularization of the nonnegative least squares algorithm and to the Rician noise present in magnitude MR images.

Conclusion: The orthogonal matching pursuit algorithm provides a more accurate alternative for T2 relaxometry myelin water quantification.
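For concreteness, a minimal sketch of how orthogonal matching pursuit can be applied to a dictionary of single-exponential T2 decays is given below. This is an illustration rather than the authors' implementation: the echo spacing, T2 grid, sparsity level, 40 ms myelin-water cutoff, the nonnegative refitting step, and the function name omp_mwf are all assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def omp_mwf(signal, echo_times, t2_grid, n_atoms=3, mw_cutoff=0.040):
    """Sketch: estimate the MWF from a multiecho T2 decay via a nonnegative
    orthogonal matching pursuit over a dictionary of single-exponential decays.
    n_atoms, the T2 grid, and the 40 ms myelin-water cutoff are illustrative."""
    # Dictionary: one decay curve per candidate T2 value (times in seconds).
    D = np.exp(-echo_times[:, None] / t2_grid[None, :])
    D_unit = D / np.linalg.norm(D, axis=0)

    residual = signal.astype(float).copy()
    support = []
    amps = np.array([])
    for _ in range(n_atoms):
        # Greedy selection: atom most correlated with the current residual.
        idx = int(np.argmax(D_unit.T @ residual))
        if idx not in support:
            support.append(idx)
        # Refit nonnegative amplitudes on the selected atoms only.
        amps, _ = nnls(D[:, support], signal)
        residual = signal - D[:, support] @ amps

    # MWF: signal fraction of components with T2 below the myelin cutoff.
    selected_t2 = t2_grid[np.array(support)]
    return amps[selected_t2 < mw_cutoff].sum() / amps.sum()

# Synthetic example: 40% short-T2 (20 ms) and 60% long-T2 (80 ms) pool.
te = np.arange(0.01, 0.33, 0.01)                 # 32 echoes, 10 ms spacing
sig = 0.4 * np.exp(-te / 0.020) + 0.6 * np.exp(-te / 0.080)
grid = np.linspace(0.010, 0.300, 120)
print(omp_mwf(sig, te, grid))                    # roughly 0.4 for this noiseless case
```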
We study a caching problem that resembles a lossy Gray-Wyner network: A source produces vector samples from a Gaussian distribution, but the user is interested in the samples of only one component. The encoder first sends a cache message without any knowledge of the user's preference. Upon learning her request, a second message is provided in the update phase so as to attain the desired fidelity on that component. The cache is efficient if it exploits as much of the correlation in the source as possible, which connects to the notions of Wyner's common information (for high cache rates) and Watanabe's total correlation (for low cache rates). For the former, we extend known results for 2 Gaussians to multivariates by showing that common information is a simple linear program, which can be solved analytically for circulant correlation matrices. Total correlation in a Gaussian setting is less well-studied. We show that for bivariates and using Gaussian auxiliaries it is captured in the dominant eigenvalue of the correlation matrix. For multivariates the problem is a more difficult optimization over a non-convex domain, but we conjecture that circulant matrices may again be analytically solvable.
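The bivariate quantities mentioned above have simple closed forms that can be sketched directly; the multivariate linear program for circulant correlation matrices developed in the paper is not reproduced here. The function names below are illustrative, and the common-information expression is the standard bivariate Gaussian result.

```python
import numpy as np

def wyner_ci_bivariate(rho):
    """Closed-form Wyner common information of two jointly Gaussian
    variables with correlation coefficient rho (in nats)."""
    r = abs(rho)
    return 0.5 * np.log((1 + r) / (1 - r))

def dominant_eigenvalue(corr_matrix):
    """Largest eigenvalue of a correlation matrix; for a bivariate source
    with correlation rho this equals 1 + |rho|."""
    return float(np.max(np.linalg.eigvalsh(corr_matrix)))

rho = 0.8
R = np.array([[1.0, rho], [rho, 1.0]])
print(wyner_ci_bivariate(rho))   # 0.5 * ln(9) ≈ 1.10 nats
print(dominant_eigenvalue(R))    # 1.8
```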
We study a generalization of Wyner's common information to Watanabe's total correlation. The first minimizes the description size required for a variable that can make two other random variables conditionally independent. If independence is unattainable, Watanabe's total (conditional) correlation measures just how independent they have become. Following up on earlier work for scalar Gaussians, we discuss the minimization of total correlation for Gaussian vector sources. Using Gaussian auxiliaries, we show one should transform two vectors of length d into d independent pairs, after which a reverse water-filling procedure distributes the minimization over all these pairs. Lastly, we show how this minimization of total conditional correlation fits a lossy coding problem by using the Gray-Wyner network as a model for a caching problem.
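A sketch of the two building blocks referenced above, assuming a standard canonical-correlation decomposition for the pairing step and the classical reverse water-filling allocation for parallel Gaussian components (the paper's exact allocation rule for total correlation may differ, and both function names are hypothetical):

```python
import numpy as np

def canonical_pairs(sigma_xx, sigma_yy, sigma_xy):
    """Whiten X and Y and diagonalize their cross-covariance, so that two
    correlated Gaussian vectors of length d become d independent pairs,
    each pair having a single correlation coefficient rho_i."""
    wx = np.linalg.inv(np.linalg.cholesky(sigma_xx))  # whitening for X
    wy = np.linalg.inv(np.linalg.cholesky(sigma_yy))  # whitening for Y
    u, rho, vt = np.linalg.svd(wx @ sigma_xy @ wy.T)
    tx = u.T @ wx   # X -> canonical coordinates
    ty = vt @ wy    # Y -> canonical coordinates
    return tx, ty, rho

def reverse_waterfill(variances, total_distortion):
    """Classical reverse water-filling for parallel Gaussian components:
    each component receives distortion min(theta, variance), with the water
    level theta set by bisection so the distortions sum to the budget."""
    variances = np.asarray(variances, dtype=float)
    lo, hi = 0.0, float(variances.max())
    for _ in range(100):
        theta = 0.5 * (lo + hi)
        if np.minimum(theta, variances).sum() < total_distortion:
            lo = theta
        else:
            hi = theta
    return np.minimum(theta, variances)
```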
Caching is a technique that alleviates networks during peak hours by transmitting partial information before any request is made. In a lossy setting of Gaussian databases, we study a single-user model in which good caching strategies minimize the data still needed on average once the user requests a file. The encoder decides on a caching strategy by weighing two key parameters: the prior preference for a file and the correlation among the files. Considering uniform prior preference but correlated files, caching becomes an application of Wyner's common information and Watanabe's total correlation. We show that this case triggers a split: caching Gaussian sources is a non-convex optimization problem unless one spends enough rate to cache all the common information between files. Combining both correlation and user preference, we explicitly characterize the full trade-off when the encoder uses Gaussian codebooks in a database of two files: we show that as the size of the cache increases, the encoder should change strategy and increasingly prioritize user preference over correlation. In this specific case we also address the loss in performance incurred if the encoder has no knowledge of the user's preference, and we show that this loss is bounded.