2010
DOI: 10.1002/mrm.22736
Sensitivity encoding reconstruction with nonlocal total variation regularization

Abstract: In sensitivity encoding reconstruction, the issue of ill conditioning becomes serious and thus the signal-to-noise ratio becomes poor when a large acceleration factor is employed. Total variation (TV) regularization has been used to address this issue and shown to better preserve sharp edges than Tikhonov regularization but may cause blocky effect. In this article, we study nonlocal TV regularization for noise suppression in sensitivity encoding reconstruction. The nonlocal TV regularization method extends the…

Cited by 87 publications (65 citation statements)
References 30 publications
“…In equation (5), q_x(u) and q_x(v) represent small patches of size m × m (m an odd integer) centered at the coordinates u and v, respectively, in the image x; Z_x signifies a normalization factor; h denotes a scale parameter, related to the patch size and the standard deviation of the noise, that controls to what extent similarity between patches is enforced; and ν is a positive value used to control the nonlocality of the method and to speed up computation, meaning that only the neighbors in a window of size ν × ν around the target pixel are considered when calculating the nonlocal image gradient [10].…”
Section: Theory and Algorithm
Mentioning confidence: 99%
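The patch-similarity weights described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the symbols h (scale), m (patch size), and ν (search-window size) follow the citation's description, and the default values are assumptions.

```python
import numpy as np

def nonlocal_weights(x, u, h=0.5, m=3, nu=7):
    """Nonlocal weights w(u, v) for a target pixel u: patches q_x(u) and
    q_x(v) of size m x m are compared within a nu x nu search window
    around u, and the weights are normalized by the factor Z_x."""
    pad = m // 2
    xp = np.pad(x, pad, mode="reflect")   # reflect-pad so every patch exists

    def patch(c):
        r, s = c
        return xp[r:r + m, s:s + m]       # m x m patch centered at c

    qu = patch(u)
    half = nu // 2
    rows = range(max(0, u[0] - half), min(x.shape[0], u[0] + half + 1))
    cols = range(max(0, u[1] - half), min(x.shape[1], u[1] + half + 1))
    w = {}
    for r in rows:
        for s in cols:
            d2 = np.sum((qu - patch((r, s))) ** 2)  # patch distance
            w[(r, s)] = np.exp(-d2 / h ** 2)        # similarity weight
    Z = sum(w.values())                              # normalization Z_x
    return {v: wv / Z for v, wv in w.items()}
```

The restriction to a ν × ν window is what makes the computation tractable: only O(ν²) candidate pixels are compared per target pixel instead of the whole image.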
“…Consequently, it blurs some details and causes a blocking effect that loses fine structures, even though edges are preserved in the reconstruction. To overcome this intrinsic drawback of the TV model, nonlocal total variation (NLTV) reconstruction methods were proposed [9,10]. These methods avoid blocking effects effectively but are often unable to find similar patches accurately, since structural information has been seriously degraded by the undersampled k-space reconstruction.…”
Section: Introduction
Mentioning confidence: 99%
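The blocking (staircase) effect stems from the local finite-difference gradient that TV penalizes. A minimal sketch of the discrete isotropic TV seminorm makes the point: a smooth ramp and a piecewise-constant step with the same total rise have equal TV, so the regularizer cannot prefer the smooth solution.

```python
import numpy as np

def tv(x):
    """Discrete isotropic total variation: sum of gradient magnitudes
    from forward differences (last row/column repeated, so the boundary
    contributes zero gradient)."""
    dx = np.diff(x, axis=0, append=x[-1:, :])   # vertical differences
    dy = np.diff(x, axis=1, append=x[:, -1:])   # horizontal differences
    return np.sum(np.sqrt(dx ** 2 + dy ** 2))
```

For example, `tv` of a linear ramp from 0 to 1 equals `tv` of a single step from 0 to 1 over the same rows; combined with a data-fidelity term, this indifference biases reconstructions toward piecewise-constant (blocky) images.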
“…In order to update the dictionary, we have to consider all samples. Taking the derivative of the functional with respect to the dictionary, we obtain the gradient descent update rule in (16). After the gradient descent update, the columns of the designed dictionary are additionally constrained to be of unit norm so as to avoid scaling ambiguity [19]. One property worth noting is that each dictionary update can be regarded as a refinement operation.…”
Section: B. Bregman Technique for Solving Subproblem (10)
Mentioning confidence: 99%
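The update described above, a gradient step on the dictionary followed by a column-wise unit-norm projection, can be sketched as follows. The quadratic fidelity term ||Y − DA||²_F and the step size are illustrative assumptions standing in for the paper's functional, not its exact form.

```python
import numpy as np

def dictionary_update(D, Y, A, step=0.01):
    """One gradient-descent refinement of the dictionary D for the assumed
    objective ||Y - D A||_F^2, followed by projecting each column onto the
    unit sphere to remove the scaling ambiguity between D and the codes A."""
    grad = -2.0 * (Y - D @ A) @ A.T                   # d/dD ||Y - D A||_F^2
    D = D - step * grad                               # gradient descent step
    D = D / np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm columns
    return D
```

Without the normalization, scaling a column of D up and the corresponding row of A down by the same factor leaves the fidelity term unchanged, which is exactly the ambiguity the projection removes.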
“…This property has been successfully applied to image denoising problems [13]–[15], and many authors have also incorporated this nonlocal information into CS recovery problems. For instance, Liang et al. [16] applied nonlocal total variation (NLTV) regularization to reduce the blocky effect introduced by TV regularization. This method replaces the conventional gradient functional used in TV with a weighted nonlocal gradient function and improves the signal-to-noise ratio of the reconstruction in parallel imaging.…”
Mentioning confidence: 99%
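The weighted nonlocal gradient mentioned here replaces the local gradient magnitude with a weighted sum over similar pixels: at each pixel u the NLTV seminorm accumulates sqrt(Σ_v (x(v) − x(u))² w(u, v)). A minimal sketch, where the `weights` interface (pixel → dict of neighbor weights) is a hypothetical convenience, not the paper's data structure:

```python
import numpy as np

def nltv(x, weights):
    """Nonlocal TV seminorm: for each pixel u, the weighted nonlocal
    gradient magnitude sqrt(sum_v (x[v] - x[u])^2 * w(u, v)), summed over
    all pixels u. `weights` maps each pixel tuple u to a dict {v: w(u, v)}
    (e.g. obtained from patch similarity)."""
    total = 0.0
    for u, wu in weights.items():
        g2 = sum((x[v] - x[u]) ** 2 * wv for v, wv in wu.items())
        total += np.sqrt(g2)
    return total
```

Because the differences are taken against similar patches rather than only immediate neighbors, a textured but self-similar region incurs a small penalty, which is why NLTV avoids the staircase artifact of local TV.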
“…Recently, exploiting the inherent sparsity of the image through patch similarity and nonlocal operations on overlapping image patches has received much attention [8,9]. An important research topic is sparse representation with dictionary learning (DL), which builds an adaptive sparse representation basis from particular image instances for sparse coding.…”
Section: Introduction
Mentioning confidence: 99%
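The sparse-coding step that dictionary learning relies on can be illustrated with a minimal orthogonal matching pursuit, a generic greedy sparse-coding routine rather than the specific algorithm of the cited work; the dictionary is assumed to have unit-norm columns.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: select k atoms (columns) of the
    unit-norm-column dictionary D to sparsely represent the signal y, and
    return the sparse coefficient vector."""
    residual, support = y.copy(), []
    for _ in range(k):
        corr = np.abs(D.T @ residual)         # correlation with each atom
        corr[support] = 0.0                   # never reselect an atom
        support.append(int(np.argmax(corr)))  # pick the best new atom
        # refit all selected atoms jointly by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    a = np.zeros(D.shape[1])
    a[support] = coef
    return a
```

In patch-based DL reconstruction, a routine of this kind codes each overlapping image patch over the learned dictionary, and the coded patches are averaged back into the image.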