2022
DOI: 10.1109/ojsp.2021.3136488
Learning of Continuous and Piecewise-Linear Functions With Hessian Total-Variation Regularization

Abstract: We develop a novel 2D functional learning framework that employs a sparsity-promoting regularization based on second-order derivatives. Motivated by the nature of the regularizer, we restrict the search space to the span of piecewise-linear box splines shifted on a 2D lattice. Our formulation of the infinite-dimensional problem on this search space allows us to recast it exactly as a finite-dimensional one that can be solved using standard methods in convex optimization. Since our search space is composed of c…
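The finite-dimensional recast described in the abstract suggests a generalized-LASSO structure: a quadratic data fit plus an l1 penalty on a linear transform of the coefficients. The sketch below is a minimal illustration of such a problem in CVXPY, under stated assumptions: H stands in for the matrix evaluating the box-spline expansion at the data sites, L for a discretized second-order operator, and lam for the regularization strength; none of these are the paper's actual operators or values.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_samples, n_coeffs = 50, 30

# H: placeholder for the matrix evaluating the box-spline expansion at the data sites.
H = rng.standard_normal((n_samples, n_coeffs))
# L: placeholder for a discretized second-order (Hessian-like) operator;
# a 1D second-difference matrix is used here purely for illustration.
L = np.diff(np.eye(n_coeffs), n=2, axis=0)
y = rng.standard_normal(n_samples)  # synthetic observations

lam = 0.1                  # regularization strength (arbitrary choice)
c = cp.Variable(n_coeffs)  # expansion coefficients
objective = cp.Minimize(cp.sum_squares(H @ c - y) + lam * cp.norm1(L @ c))
cp.Problem(objective).solve()
print("nonzero entries of L @ c:", int(np.sum(np.abs(L @ c.value) > 1e-6)))
```

The l1 term drives many entries of L @ c to zero, which is the mechanism behind the sparsity promotion mentioned in the abstract.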

Cited by 8 publications (10 citation statements) | References 56 publications
“…Hence, HTV regularization allows direct control over the complexity of the learned mapping. This concept was put into practice in [31], albeit in the restricted setting d = 2 (two input variables). The CPWL functions in [31] are parameterized through box splines on a hexagonal lattice.…”
Section: B. Variational Learning of CPWL Functions (mentioning)
confidence: 99%
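For context on why HTV acts as a complexity measure: a CPWL function has a piecewise-constant gradient, so its Hessian concentrates on the edges of the underlying triangulation. A hedged sketch of the resulting expression, with notation assumed here rather than quoted from the paper:

```latex
% Hedged sketch (assumed notation): for a CPWL function f, the HTV reduces
% to a weighted sum of gradient jumps across the edge set \mathcal{E}:
\mathrm{HTV}(f)
  \;=\; \sum_{e \in \mathcal{E}}
        \bigl\| \nabla f_{T_e^{+}} - \nabla f_{T_e^{-}} \bigr\|_{2}\,
        \lvert e \rvert
```

where T_e^+ and T_e^- denote the two facets adjacent to the edge e and |e| is its length. Each term vanishes wherever f is affine across e, so the penalty effectively counts and weights the "kinks" of the mapping.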
“…This concept was put into practice in [31], albeit in the restricted setting d = 2 (two input variables). The CPWL functions in [31] are parameterized through box splines on a hexagonal lattice. The extension of this uniformly gridded framework to higher dimensions is computationally restricted due to the exponential growth of the number of grid points with the dimension.…”
Section: B. Variational Learning of CPWL Functions (mentioning)
confidence: 99%
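The exponential growth noted in this statement is easy to quantify: a uniform grid with N sites per axis has N^d sites in dimension d. A tiny illustration, where N = 64 is an arbitrary choice rather than a value from the cited works:

```python
# A uniform grid with N sites per axis has N**d sites in dimension d,
# which is why gridded frameworks become impractical as d grows.
N = 64
for d in range(1, 7):
    print(f"d = {d}: {N**d:,} grid points")
```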
“…Throughout this section, we assume that the vertices coincide with the sites of a lattice Λ. This is natural in some applications, such as image processing [41], but can also be a sensible choice in low-dimensional learning problems [42]. While a uniform grid constrains the model and thereby reduces its expressivity, it significantly improves the computational performance.…”
Section: Uniform Setting (mentioning)
confidence: 99%