2022
DOI: 10.1007/s11760-021-02111-0
A new cartoon + texture image decomposition model based on the Sobolev space

Cited by 7 publications (12 citation statements)
References 34 publications
“…To ensure a systematic comparison, we elaborate on our semi-sparsity model with its superior performance against a series of image decomposition methods. Specifically, we compare the model with the well-known ROF model [10], TV-L1 [56], TV-G [3], TV-H [15], TV-G-H [57], BTF [38], RTV [5], TGV-L1 [27], TGV-H [32], HTV-H [32], which cover most of the structural and textural models listed in Tab. 1 and Tab.…”
Section: Dataset and Compared Methods
confidence: 99%
“…The notation ∥·∥0 is the so-called L0 quasi-norm denoting the number of non-zero entries of a vector, which provides a simple and easily-grasped measurement of sparsity. [10], (c) TV-L1 (λ = 0.003, α = 0.0045) [56], (d) TV-G (λ = 0.003, α = 0.0005, γ = 0.0002) [3], (e) TV-H (λ = 0.002, α = 0.005) [15], (f) TV-G-H (λ = 0.004, α = 0.001, γ = 0.002) [57], (g) BTF (σ = 5.0, iter = 4) [23], (h) RTV (λ = 0.01, σ = 3.0) [5], (i) TGV-L1 (λ = 0.0007, α = 0.0008, β = 0.0008) [27], (j) TGV-H (λ = 0.004, α = 0.01, β = 0.03) [32], (k) HTV-H (λ = 0.003, α = 0.006, β = 0.0015) [32], and (l) Ours (λ = 0.005, α = 0.006, β = 0.001). For fairness, all methods are fine-tuned for a similar level of smoothness.…”
Section: (Semi)-sparsity Inducing Regularization
confidence: 99%
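The L0 quasi-norm quoted above simply counts non-zero entries of a vector. A minimal NumPy sketch of that count (the function name `l0_quasi_norm` and the floating-point tolerance are our own choices, not part of the cited papers):

```python
import numpy as np

def l0_quasi_norm(v, tol=1e-12):
    """Count entries of v whose magnitude exceeds tol.

    The exact L0 quasi-norm counts non-zero entries; in floating
    point, a small tolerance stands in for exact zero.
    """
    v = np.asarray(v, dtype=float)
    return int(np.count_nonzero(np.abs(v) > tol))

# A sparser vector yields a smaller L0 value.
sparse = [0.0, 3.0, 0.0, 0.0, -1.5]
dense = [0.2, 3.0, 0.1, -0.4, -1.5]
print(l0_quasi_norm(sparse))  # 2
print(l0_quasi_norm(dense))   # 5
```

Because it is a count rather than a sum of magnitudes, the L0 quasi-norm is non-convex and not a true norm (it is not homogeneous), which is why sparsity-inducing models often relax it or treat it with specialized solvers.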