2015
DOI: 10.3233/bme-151468

A CT reconstruction approach from sparse projection with adaptive-weighted diagonal total-variation in biomedical application

Abstract: For lack of directivity in Total Variation (TV), which uses only the x- and y-direction gradient transforms as its sparse representation during the iteration process, this paper introduces Adaptive-weighted Diagonal Total Variation (AwDTV), which constrains the reconstructed image with the diagonal-direction gradients and adds associated weights that are expressed as an exponential function and can be adaptively adjusted by the local image-intensity diagonal gradient, for the purpose of pre…
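Based only on the abstract above, a minimal NumPy sketch of such an adaptive-weighted diagonal TV term might look as follows. The diagonal finite differences, the exponential weight, and the scale parameter `delta` are illustrative assumptions; the paper's exact weight function and discretization are not shown in this snippet.

```python
import numpy as np

def awdtv(f, delta=0.005):
    """Adaptive-weighted diagonal TV of a 2-D image (illustrative sketch).

    Uses the two diagonal finite differences as the sparsifying transform and
    weights each pixel's contribution by an exponential of the local diagonal
    gradient magnitude, so strong edges are penalized less. `delta` is an
    assumed scale parameter, not a value from the paper.
    """
    f = np.asarray(f, dtype=float)
    d1 = f[1:, 1:] - f[:-1, :-1]      # main-diagonal ("\") differences
    d2 = f[1:, :-1] - f[:-1, 1:]      # anti-diagonal ("/") differences
    grad = np.sqrt(d1 ** 2 + d2 ** 2)
    w = np.exp(-(grad / delta) ** 2)  # adaptive, edge-preserving weights
    return float(np.sum(w * grad))

# Quick check on a piecewise-constant phantom with mild noise.
rng = np.random.default_rng(0)
phantom = np.zeros((64, 64))
phantom[16:48, 16:48] = 1.0
print(awdtv(phantom + 0.02 * rng.standard_normal(phantom.shape)))
```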

Cited by 8 publications (5 citation statements) | References 17 publications
“…Instead of relying only on an MSE-based loss, ISRGAN is trained with a combination of VGG-based perceptual loss and adversarial loss. To further improve the quality of super-resolution images, a novel strategy, inspired by adaptive total variation [24], [25] and diagonal total variation [26], [27] models and taking full advantage of the directional information of edges and textures, is introduced to improve super-resolution performance based on a GAN coupled with a total-variation-based model. Furthermore, a new adaptive model based on diagonal total variation is proposed to preserve texture details.…”
Section: Related Work
confidence: 99%
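As a rough illustration of the loss composition described in this statement, the sketch below combines a pixel-wise MSE term, a perceptual term, an adversarial term, and a TV-style regularizer. The callables and the `lambda_*` weights are placeholders chosen for the example, not values or names from the cited work.

```python
import numpy as np

def combined_sr_loss(sr, hr, perceptual_loss, adversarial_loss, tv_term,
                     lambda_perc=1.0, lambda_adv=1e-3, lambda_tv=1e-5):
    """Illustrative super-resolution objective (all weights are assumptions).

    `perceptual_loss(sr, hr)` stands in for a VGG-feature loss,
    `adversarial_loss(sr)` for a discriminator-based loss, and
    `tv_term(sr)` for a (diagonal) total-variation regularizer.
    """
    mse = np.mean((sr - hr) ** 2)
    return (mse
            + lambda_perc * perceptual_loss(sr, hr)
            + lambda_adv * adversarial_loss(sr)
            + lambda_tv * tv_term(sr))

# Dummy stand-ins just to show the call shape.
sr, hr = np.random.rand(32, 32), np.random.rand(32, 32)
print(combined_sr_loss(sr, hr,
                       perceptual_loss=lambda a, b: np.mean(np.abs(a - b)),
                       adversarial_loss=lambda a: 0.0,
                       tv_term=lambda a: np.sum(np.abs(np.diff(a)))))
```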
“…TV considers only the vertical and horizontal gradient operators; the diagonal gradients are missed, and some directional information of edges and image texture is consequently lost. To overcome this limitation, a diagonal total variation (DTV) model was proposed, which considers only the diagonal gradients instead of the vertical and horizontal gradients [43].…”
Section: JINST 14 P08023
confidence: 99%
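For concreteness, a common discretization of the two penalties is sketched below; the exact form used in [43] may differ (for example, by a normalization of the diagonal step).

```latex
% Conventional TV: horizontal and vertical finite differences.
\|f\|_{TV}  = \sum_{s,t} \sqrt{(f_{s+1,t}-f_{s,t})^2 + (f_{s,t+1}-f_{s,t})^2}
% Diagonal TV (DTV): the same construction with the two diagonal differences.
\|f\|_{DTV} = \sum_{s,t} \sqrt{(f_{s+1,t+1}-f_{s,t})^2 + (f_{s+1,t}-f_{s,t+1})^2}
```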
“…Some directional information of edges and image texture is likely to be lost in the conventional TV model because it considers only the vertical and horizontal gradient operators. To overcome this problem, a DTV model was introduced [43] for sparse-view CT image reconstruction that includes only the diagonal gradient operators. The DTV model has been defined as follows:…”
Section: Adaptive-weighted Diagonal TV Model
confidence: 99%
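The quoted snippet cuts off before the definition itself. As a hedged illustration of how a diagonal-TV penalty might be used inside a sparse-view reconstruction loop, a generic Landweber-plus-regularization sketch is given below; the structure, step sizes, and function names are assumptions for the example, not the algorithm from [43] or from the present paper.

```python
import numpy as np

def dtv_gradient(f, eps=1e-8):
    """Gradient of a smoothed diagonal-TV term (illustrative discretization)."""
    d1 = f[1:, 1:] - f[:-1, :-1]              # "\" differences
    d2 = f[1:, :-1] - f[:-1, 1:]              # "/" differences
    r = np.sqrt(d1 ** 2 + d2 ** 2 + eps)
    g = np.zeros_like(f)
    g[1:, 1:] += d1 / r
    g[:-1, :-1] -= d1 / r
    g[1:, :-1] += d2 / r
    g[:-1, 1:] -= d2 / r
    return g

def reconstruct(A, p, shape, n_iter=50, step=1e-3, alpha=0.1):
    """Alternate a data-fidelity (Landweber) update with a DTV descent step.

    A     : (n_rays, n_pixels) system matrix
    p     : measured sparse-view projections
    shape : (rows, cols) of the image to reconstruct
    All step sizes are arbitrary assumptions for this sketch.
    """
    f = np.zeros(shape)
    for _ in range(n_iter):
        f_flat = f.ravel() + step * A.T @ (p - A @ f.ravel())  # fit A f to p
        f = f_flat.reshape(shape)
        f -= alpha * step * dtv_gradient(f)                    # reduce diagonal TV
        np.clip(f, 0.0, None, out=f)                           # non-negativity
    return f
```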
“…However, these algorithms take into account only the vertical and horizontal directions for the gradient operator of the TV term and ignore the diagonal direction. To address this limitation, Deng et al. [23] proposed a diagonal TV computation method in 2015 that considers only the diagonal gradient in the model, claiming to reconstruct a better image than the traditional TV method, which considers only the vertical and horizontal directions. Additionally, researchers have proposed TV models based on different directions and on multiple directions to better recover the edges and details lost in TV minimization, including TV models based on 8- and 26-directional gradient operators [24], multi-direction anisotropic total variation (MDATV) [25], and a BDTV model that adaptively computes the orientation information of gradient operators [19].…”
Section: Introduction
confidence: 99%