2014
DOI: 10.1155/2014/790547
TV+TV² Regularization with Nonconvex Sparseness-Inducing Penalty for Image Restoration

Abstract: To restore high-quality images, we propose a compound regularization method that combines a new higher-order extension of total variation (TV+TV²) with a nonconvex sparseness-inducing penalty. Considering the presence of varying directional features in images, we employ the shearlet transform to preserve the abundant geometrical information of the image. The nonconvex sparseness-inducing penalty increases robustness to noise and image nonsparsity. In what follows, we present the numerical …
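A minimal sketch of the kind of compound objective the abstract describes, assuming a linear degradation operator A, observed image f, shearlet transform S, regularization weights α, β, γ, and a nonconvex sparseness-inducing penalty φ (for instance an ℓp quasi-norm with 0 < p < 1); all symbols here are illustrative assumptions, and the paper's exact formulation may differ:

\[
\min_u \; \tfrac{1}{2}\|Au - f\|_2^2 \;+\; \alpha\,\mathrm{TV}(u) \;+\; \beta\,\mathrm{TV}^2(u) \;+\; \gamma\,\varphi(Su),
\qquad \varphi(c) = \sum_i |c_i|^p,\; 0 < p < 1,
\]

where TV(u) = ‖∇u‖₁ penalizes first-order variation and TV²(u) = ‖∇²u‖₁ penalizes second-order variation, which is the usual motivation for TV+TV² models to suppress staircasing while keeping edges.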

Cited by 5 publications (5 citation statements, 2015–2023); references 37 publications.
“…During the training of the main network, the FA calculation branch calculates FA maps from the super-resolution outputs and the high-resolution targets, respectively. The difference between the FA results is termed the FA loss. For the loss function design, we combine the structural similarity index (SSIM) [48], the pixel-wise difference (ℒ1), image edge preservation (gradient loss and gradient map loss) [49], the proposed FA loss, and the cross-entropy GAN loss (ℒGAN). The loss function can be written as: … Here ∇ refers to the gradient operator, and ℒGAN refers to: …”
Section: Methods (mentioning)
confidence: 99%
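The combined loss equation itself did not survive extraction. A hedged reconstruction from the terms the excerpt names, with weights λ that are assumptions not given in the excerpt, would read:

\[
\mathcal{L} \;=\; \lambda_{\mathrm{SSIM}}\,\mathcal{L}_{\mathrm{SSIM}}
\;+\; \lambda_{1}\,\mathcal{L}_{1}
\;+\; \lambda_{\nabla}\big(\mathcal{L}_{\mathrm{grad}} + \mathcal{L}_{\mathrm{grad\text{-}map}}\big)
\;+\; \lambda_{\mathrm{FA}}\,\mathcal{L}_{\mathrm{FA}}
\;+\; \lambda_{\mathrm{GAN}}\,\mathcal{L}_{\mathrm{GAN}},
\]

where the gradient terms use the operator ∇ mentioned in the quote.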
“…No additional enhancements were made to Faceswap, so the cost function confirms that the result is based only on image reconstruction. The developers implemented cost functions such as L1 (mean absolute error), L2 (mean squared error), Logcosh, the generalized loss [42], the L-inf norm, DSSIM (difference of structural similarity), GMSDLoss (gradient magnitude similarity deviation loss) [43], and GradientLoss [44].…”
Section: Generator (mentioning)
confidence: 99%
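As a reference for the losses named above, here is a minimal NumPy sketch of the standard textbook forms of L1, L2, Logcosh, and L-inf; these generic definitions are assumptions for illustration, not Faceswap's actual implementation:

```python
# Generic pixel-wise reconstruction losses, in their standard forms.
# These are illustrative definitions, not Faceswap's implementation.
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error: robust to outliers, non-smooth at zero."""
    return np.mean(np.abs(pred - target))

def l2_loss(pred, target):
    """Mean squared error: smooth, but heavily penalizes large errors."""
    return np.mean((pred - target) ** 2)

def logcosh_loss(pred, target):
    """Log-cosh: behaves like L2 near zero and like L1 for large errors."""
    return np.mean(np.log(np.cosh(pred - target)))

def linf_loss(pred, target):
    """L-infinity norm: penalizes only the single worst pixel error."""
    return np.max(np.abs(pred - target))
```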
“…The combination of two or more sparsifying transforms is conveniently implemented using analysis-based reconstruction algorithms, such as the Split Bregman Reconstruction Algorithm (SBRA) [6]–[8] and the Alternating Direction Method of Multipliers (ADMM) [9]. The use of multiple transforms leads to a number of regularization parameters and increases the algorithm's complexity.…”
Section: Introduction (mentioning)
confidence: 99%
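A minimal sketch of how two sparsifying transforms can be combined in an analysis-based reconstruction via ADMM, using dense matrices W1, W2 and the scaled-dual formulation; every name and parameter here is an illustrative assumption, not the cited papers' implementation:

```python
# Hedged ADMM sketch for analysis-based reconstruction with two
# sparsifying transforms (dense matrices for illustration):
#   min_x 0.5*||A x - b||^2 + lam1*||W1 x||_1 + lam2*||W2 x||_1
import numpy as np

def soft(v, t):
    """Soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_two_transforms(A, b, W1, W2, lam1, lam2, rho=1.0, iters=100):
    n = A.shape[1]
    x = np.zeros(n)
    z1 = np.zeros(W1.shape[0]); u1 = np.zeros_like(z1)  # split + scaled dual
    z2 = np.zeros(W2.shape[0]); u2 = np.zeros_like(z2)
    # Normal-equations matrix for the x-update (assumed invertible here;
    # factor once and reuse in a real implementation).
    M = A.T @ A + rho * (W1.T @ W1 + W2.T @ W2)
    for _ in range(iters):
        rhs = A.T @ b + rho * (W1.T @ (z1 - u1) + W2.T @ (z2 - u2))
        x = np.linalg.solve(M, rhs)               # quadratic x-update
        z1 = soft(W1 @ x + u1, lam1 / rho)        # l1 prox per transform
        z2 = soft(W2 @ x + u2, lam2 / rho)
        u1 += W1 @ x - z1                         # dual ascent steps
        u2 += W2 @ x - z2
    return x
```

The separate weights lam1 and lam2 illustrate the excerpt's point: each added transform brings its own regularization parameter to tune, which is what drives up the algorithm's complexity.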