2021 21st International Conference on Computational Science and Its Applications (ICCSA)
DOI: 10.1109/iccsa54496.2021.00016
Combining Weighted Total Variation and Deep Image Prior for natural and medical image restoration via ADMM

Cited by 40 publications (22 citation statements) · References 19 publications
“…(8) is similar to the traditional DIP solution by forcing Df_θ(x_0) to approach v_k − u_k/β with the neural network; here the parameter θ is optimized by a gradient descent method [27,28]. In Eqs. (9) and (10), the second-order derivatives in different directions are separately processed.…”
Section: Methods
confidence: 99%
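The sub-step described in the excerpt above — fitting the network output to the current ADMM target v_k − u_k/β by gradient descent on the network parameters — can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: a single linear layer stands in for the network f_θ, the operator D is taken as the identity, and the names (`theta`, `target`, `lr`) are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random input x_0, as in Deep Image Prior.
x0 = rng.standard_normal(16)

# Stand-in for the ADMM target v_k - u_k / beta.
target = np.sin(np.linspace(0.0, np.pi, 16))

# A single linear layer theta plays the role of the network f_theta;
# the operator D is taken as the identity in this toy sketch.
theta = 0.1 * rng.standard_normal((16, 16))

lr = 0.01
for _ in range(500):
    residual = theta @ x0 - target      # f_theta(x_0) - (v_k - u_k/beta)
    grad = np.outer(residual, x0)       # gradient of 0.5*||residual||^2 w.r.t. theta
    theta -= lr * grad                  # gradient-descent step on theta

fit_error = float(np.max(np.abs(theta @ x0 - target)))
print(fit_error)                        # near zero: the "network" has matched the target
```

In the actual method the descent runs on a deep CNN's weights, so it is stopped early rather than run to convergence; this linear version converges exactly, which only illustrates the structure of the sub-problem.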
“…For the percentage of noise in the biomedical image, a noise-estimation method known in the literature was used, which yielded a noise level of just under five percent. As for the qualitative assessment of the images, we relied on a paper from the literature [7] in which the same image is already used, with specific details referenced to evaluate its accuracy.…”
Section: Experiments on the Real CT Image
confidence: 99%
“…The chosen Autoencoder [5,7] belongs to the DIP category and is prone to semi-convergence. Regarding the choice of the CNN architecture, in particular the autoencoder architecture, in this work we fix it to one of the networks proposed in Ulyanov et al. [5], that is, an autoencoder with five downsampling and five bilinear upsampling layers with convolutional skip connections.…”
confidence: 99%
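The architecture described in the excerpt — five 2× downsampling stages and five bilinear 2× upsampling stages joined by skip connections — can be sketched at the data-flow level as follows. This is only a shape/connectivity skeleton in NumPy: the learned convolutions of the actual network in Ulyanov et al. [5] are omitted, the skips are shown as plain additive shortcuts, and all function names are illustrative.

```python
import numpy as np

def downsample2x(x):
    """2x downsampling of an (H, W) image by average pooling."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def bilinear_upsample2x(x):
    """2x bilinear upsampling of an (H, W) image (separable linear interpolation)."""
    h, w = x.shape
    rows = np.linspace(0.0, h - 1, 2 * h)
    cols = np.linspace(0.0, w - 1, 2 * w)
    tmp = np.empty((2 * h, w))
    for j in range(w):                    # interpolate along rows
        tmp[:, j] = np.interp(rows, np.arange(h), x[:, j])
    out = np.empty((2 * h, 2 * w))
    for i in range(2 * h):                # interpolate along columns
        out[i, :] = np.interp(cols, np.arange(w), tmp[i, :])
    return out

def autoencoder_skeleton(x, levels=5):
    """Data-flow skeleton: five down- and five upsampling stages with skips."""
    skips = []
    h = x
    for _ in range(levels):
        skips.append(h)                   # feature saved for the skip connection
        h = downsample2x(h)               # encoder stage (convolutions omitted)
    for _ in range(levels):
        h = bilinear_upsample2x(h)        # decoder stage (convolutions omitted)
        h = h + skips.pop()               # additive skip at matching resolution
    return h

img = np.random.default_rng(1).standard_normal((32, 32))
out = autoencoder_skeleton(img)
print(out.shape)  # → (32, 32): spatial size is restored at the output
```

A 32×32 input is reduced to a 1×1 bottleneck over the five encoder stages, then restored to 32×32 by the decoder, with each skip reinjecting the encoder feature of matching resolution.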
“…The second term occurring in the minimization is the so-called regularization term, whose role is to encode some characteristics of the desired image and to control the influence of the noise in the reconstruction. Several options are available for such a function: the ℓ2 norm for diffuse components [6], the ℓ1 norm for promoting sparse solutions [7,8], or the Total Variation functional [9,10,11] (or its smooth counterpart [12]) for edge preservation. One may also consider a composite regularization function, such as the Elastic-Net [13], and even non-convex options are available [3,14].…”
Section: Introduction
confidence: 99%
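The (weighted) Total Variation regularizer discussed above can be sketched as follows: a discrete version using forward differences with a replicated boundary, where an optional per-pixel weight map modulates the gradient-magnitude penalty. This is a generic illustration of weighted TV, not the specific discretization used in the paper; the weight map `w` and the function name are assumptions for the sketch.

```python
import numpy as np

def weighted_tv(x, w=None):
    """Discrete weighted Total Variation of an (H, W) image.

    Forward differences with replicated boundary; `w` is an optional
    per-pixel weight map (uniform weights reduce to plain isotropic TV).
    """
    dx = np.diff(x, axis=0, append=x[-1:, :])   # vertical forward differences
    dy = np.diff(x, axis=1, append=x[:, -1:])   # horizontal forward differences
    grad_mag = np.sqrt(dx ** 2 + dy ** 2)       # isotropic gradient magnitude
    if w is None:
        w = np.ones_like(x)
    return float(np.sum(w * grad_mag))

# A piecewise-constant image with a single vertical edge:
step = np.zeros((4, 4))
step[:, 2:] = 1.0
print(weighted_tv(step))  # → 4.0: one unit jump across each of the 4 rows
```

Because TV penalizes the total jump rather than its smoothness, piecewise-constant images score low, which is why the functional preserves edges while suppressing noise.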