2022
DOI: 10.1002/mrc.5289
Input layer regularization for magnetic resonance relaxometry biexponential parameter estimation

Abstract: Many methods have been developed for estimating the parameters of biexponential decay signals, which arise throughout magnetic resonance relaxometry (MRR) and the physical sciences. This is an intrinsically ill‐posed problem, so that estimates can depend strongly on noise and on the underlying parameter values. Regularization has proven to be a remarkably efficient procedure for providing more reliable solutions to ill‐posed problems, while, more recently, neural networks have been used for parameter estimation. We re…
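As a minimal sketch of the estimation problem the abstract describes, the following fits a biexponential decay with nonlinear least squares. The parameter names (c1, T21, c2, T22) and the noise level are illustrative assumptions, not the paper's notation or experimental values:

```python
import numpy as np
from scipy.optimize import curve_fit

# Biexponential decay model: S(t) = c1*exp(-t/T21) + c2*exp(-t/T22).
# Parameter names and values here are assumptions for illustration.
def biexp(t, c1, T21, c2, T22):
    return c1 * np.exp(-t / T21) + c2 * np.exp(-t / T22)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 500.0, 64)            # sampling times (arbitrary units)
true = (0.6, 40.0, 0.4, 120.0)             # c1, T21, c2, T22
signal = biexp(t, *true) + 0.01 * rng.standard_normal(t.size)

# Nonlinear least-squares fit; the ill-posedness noted in the abstract
# appears as strong sensitivity of these estimates to noise and to how
# well separated the two time constants are.
popt, _ = curve_fit(biexp, t, signal, p0=(0.5, 30.0, 0.5, 100.0))
print(popt)
```

Repeating the fit over many noise realizations makes the instability visible: the two component estimates trade off against each other even at modest noise levels.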

Cited by 4 publications (6 citation statements)
References 57 publications
“…We provide a more general version of the theorem in the Supplementary Information; here the choice of D as a finite difference stencil was taken to align the result with the experimental setup used in [1] (Figure 1). We sampled X in Equation 5 from a standard Gaussian ensemble and computed the left singular vectors of the first layer pre- and post-descrambling of the network in [30]. Note that the left singular vectors of the descrambled weight matrix are perfectly oscillating: this is because X as a standard Gaussian ensemble together with a linear wiretapped layer yields a descrambler P(N) ≈ TrU, which results in PW ≈ TrΣV, so that the left singular vectors of the descrambled weight matrix are given by Tr…”
Section: Results
confidence: 99%
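The quoted passage inspects the left singular vectors of the first-layer weight matrix before and after descrambling. A minimal sketch of that inspection via the SVD follows; the weight matrix and the orthogonal descrambler here are random stand-ins, not the trained network or learned descrambler of the cited work:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins: W plays the role of a first-layer weight matrix and P an
# orthogonal "descrambler". Both are random here, unlike the trained
# network and learned descrambler referenced in the text.
W = rng.standard_normal((64, 128))
P, _ = np.linalg.qr(rng.standard_normal((64, 64)))  # orthogonal stand-in

# Left singular vectors pre- and post-descrambling. Since P is orthogonal,
# P @ W has the same singular values as W, while its left singular vectors
# are rotated: if W = U S V^T, then P W = (P U) S V^T.
U_pre, S_pre, _ = np.linalg.svd(W, full_matrices=False)
U_post, S_post, _ = np.linalg.svd(P @ W, full_matrices=False)

assert np.allclose(S_pre, S_post)  # singular values are unchanged
```

In the cited experiment the point is that, for a trained network, the post-descrambling left singular vectors exhibit oscillatory structure; the random stand-in above only demonstrates the linear-algebra mechanics.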
“…The NN with the concatenated native and smoothed versions of the decay curve is termed (ND, Reg), with Reg indicating the smooth decay generated by the regularized nonlinear least squares analysis. This strategy of training on both noisy and smooth data is termed input layer regularization, and improves parameter estimation by 5-10 percent as compared to the more conventional NN estimation of parameters from noisy decay curves [30]. We find that the right singular vectors corresponding to the largest singular values of the first layer are biexponential curves, so that the network learns an input signal library in the class of its training data.…”
Section: Applications
confidence: 90%
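The (ND, Reg) input construction described above can be sketched as follows: the native noisy decay is concatenated with a smoothed version of itself to form the network input. As a stand-in for the paper's regularized nonlinear least squares, this sketch uses an ordinary (unregularized) fit to generate the smooth curve; all parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, c1, T21, c2, T22):
    return c1 * np.exp(-t / T21) + c2 * np.exp(-t / T22)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 500.0, 64)
noisy = biexp(t, 0.6, 40.0, 0.4, 120.0) + 0.02 * rng.standard_normal(t.size)

# Smooth version from a least-squares fit. The paper uses regularized
# nonlinear least squares to generate this curve; the unregularized fit
# here is a simplification.
popt, _ = curve_fit(biexp, t, noisy, p0=(0.5, 30.0, 0.5, 100.0))
smooth = biexp(t, *popt)

# (ND, Reg) input: native noisy decay concatenated with the smooth decay,
# doubling the input-layer dimension relative to the noisy-only network.
x_input = np.concatenate([noisy, smooth])
print(x_input.shape)
```

The concatenated vector would then be fed to the network's input layer in place of the noisy decay alone.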