2022
DOI: 10.5201/ipol.2022.393
Constrained and Unconstrained Inverse Potts Modelling for Joint Image Super-Resolution and Segmentation

Abstract: In this work we consider two methods for joint single-image super-resolution and image partitioning. The proposed approaches rely on a constrained and on an unconstrained version of the inverse Potts model, where an $\ell_0$ regularization prior on the image gradient is used for promoting piecewise constant solutions. For the numerical solution of both models, we provide a unified implementation based on the Alternating Direction Method of Multipliers (ADMM). Upon suitable assumptions on both model operators and on th…
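
The abstract's ADMM scheme pairs an $\ell_0$ penalty on the image gradient with a quadratic data term. As a minimal illustration of that splitting only, the Python sketch below solves the unconstrained model on a 1-D denoising toy ($A = I$, periodic differences); the operator $A = SH$, the constrained variant, the penalty weights, and the iteration count are all simplifications or assumptions, not the paper's implementation.

import numpy as np

def D(x):
    # Periodic forward difference: (Dx)_i = x_{i+1} - x_i
    return np.roll(x, -1) - x

def potts_admm_1d(y, lam=0.05, rho=1.0, iters=300):
    # ADMM for  min_x 0.5*||x - y||^2 + lam*||Dx||_0  (unconstrained inverse Potts, A = I)
    n = y.size
    k = np.arange(n)
    D_hat = np.exp(2j * np.pi * k / n) - 1.0       # DFT spectrum of the circulant D
    denom = 1.0 + rho * np.abs(D_hat) ** 2
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        # x-step: (I + rho D^T D) x = y + rho D^T (z - u), diagonal in Fourier
        rhs_hat = np.fft.fft(y) + rho * np.conj(D_hat) * np.fft.fft(z - u)
        x = np.real(np.fft.ifft(rhs_hat / denom))
        # z-step: the prox of (lam/rho)*||.||_0 is hard thresholding
        v = D(x) + u
        z = np.where(v ** 2 > 2.0 * lam / rho, v, 0.0)
        # u-step: scaled dual update
        u += D(x) - z
    return x

# Toy usage: recover a piecewise constant signal from its noisy version.
rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, -0.5, 2.0], 64)
x_hat = potts_admm_1d(clean + 0.1 * rng.standard_normal(clean.size))

The hard-thresholding z-step is what distinguishes the $\ell_0$ (Potts) prior from the soft thresholding of the $\ell_1$/TV case; it is also why the overall problem is nonconvex and ADMM serves only as a heuristic here.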

Cited by 5 publications (4 citation statements) | References 15 publications
“…where $A \in \mathbb{R}^{n \times n}$ is a known measurement operator and $\eta \in \mathbb{R}^{n}$ is random noise with standard deviation $\sigma_\eta$. Linear inverse problems are at the core of many applications [1][2][3][4][5]. However, since most inverse problems are ill-posed, it is common to formulate the solution $x^* \in \mathbb{R}^{n}$ of (1) as a minimizer of a regularized objective function…”
Section: Introduction
confidence: 99%
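
Written out, the forward model the quote calls (1) and the regularized objective it calls (2) read as below; the squared $\ell_2$ data term and the weight $\lambda$ are standard assumptions, since the excerpt is cut off before the actual functional.

    y = Ax + \eta, \qquad A \in \mathbb{R}^{n \times n}, \quad \eta \in \mathbb{R}^{n} \ \text{with standard deviation } \sigma_\eta,

    x^* \in \operatorname*{argmin}_{x \in \mathbb{R}^{n}} \ \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda\,\rho(x).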
“…For example, well-established methods such as the discrepancy principle, the L-curve, or cross-validation [16] have been considered for Gaussian noise, while the discrepancy principle has also been adapted to the presence of Poisson noise [17,18]. Alternatively, another interesting approach consists of recasting the optimization problem (2) as a constrained one [19][20][21]: namely, $x^* \in \operatorname{argmin}$…”
Section: Introduction
confidence: 99%
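
The quote breaks off at the argmin; a plausible completion of the constrained recasting, consistent with the discrepancy-principle discussion but assumed here rather than taken from [19][20][21], is

    x^* \in \operatorname*{argmin}_{x \in \mathbb{R}^{n}} \ \rho(x) \quad \text{subject to} \quad \|Ax - y\|_2 \le \varepsilon,

where the bound $\varepsilon$ is tied to the noise level $\sigma_\eta$, so parameter selection becomes the choice of the constraint radius.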
“…Another grand challenge for the model-based approach is the design of an effective regularization functional $\rho(x)$ capable of capturing intricate image features. Some examples include the well-known Tikhonov regularization [22], known as ridge regression in statistical contexts, and its variant that promotes diffuse components in the final reconstruction; the total variation functional, which aims to preserve sharp edges [2,23,24]; the $\ell_p$-norm regularizers, with $0 \le p \le 1$, which induce sparsity in the image and/or gradient domains [19,25,26]; and the elastic-net functional [4], which is a convex combination of the $\ell_1$ and $\ell_2$ norms.…”
Section: Introduction
confidence: 99%
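
For reference, the regularizers named in the quote have the following standard definitions (the operator $L$ in the Tikhonov term, the discrete gradient $\nabla$, and the mixing weight $\alpha$ are notational assumptions):

    \rho_{\mathrm{Tik}}(x) = \|Lx\|_2^2                                        (Tikhonov / ridge)
    \rho_{\mathrm{TV}}(x)  = \|\nabla x\|_{2,1}                                (isotropic total variation)
    \rho_{\ell_p}(x)       = \|x\|_p^p, \quad 0 \le p \le 1                    (sparsity on image and/or gradient)
    \rho_{\mathrm{EN}}(x)  = \alpha\|x\|_1 + (1 - \alpha)\|x\|_2^2, \ \alpha \in [0,1]   (elastic net)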
“…Following recent works [4,12], we will consider in this work a piggy-back primal-dual algorithm computing solutions of the lower-level problem and of the adjoint states at the same time. We consider both the problem of image deblurring ($A = H \in \mathbb{R}^{n \times n}$, a structured circulant convolution matrix) and super-resolution ($A = SH \in \mathbb{R}^{m \times n}$) and, in the latter case, we resort to Fourier-based approaches previously proposed in [40] and used, e.g., in [34,31] for computing proximal updates in closed form. We consider square images of size $\sqrt{n} \times \sqrt{n}$ for simplicity, with…”
Section: Introduction
confidence: 99%
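
The closed-form proximal updates mentioned at the end of the quote exploit the fact that a periodic convolution $H$ is diagonalized by the DFT. A minimal Python sketch for the deblurring case $A = H$ follows; it assumes periodic boundary conditions and omits the extra decimation step that [40] introduces for the super-resolution case $A = SH$.

import numpy as np

def prox_deblur(v, y, h_kernel, tau):
    # Solve  argmin_x 0.5*||Hx - y||^2 + (1/(2*tau))*||x - v||^2
    # in the Fourier domain, where H is circular convolution with h_kernel.
    H_hat = np.fft.fft2(h_kernel, s=v.shape)        # spectrum of the blur operator
    num = np.conj(H_hat) * np.fft.fft2(y) + np.fft.fft2(v) / tau
    den = np.abs(H_hat) ** 2 + 1.0 / tau
    return np.real(np.fft.ifft2(num / den))

In a primal-dual or ADMM loop, v would be the current auxiliary iterate and tau the step size; each call costs a few FFTs, and the blur spectrum H_hat can be precomputed once.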