2020
DOI: 10.1109/tsp.2020.2976577

Sparse Multiresolution Representations With Adaptive Kernels

Abstract: Reproducing kernel Hilbert spaces (RKHSs) are key elements of many non-parametric tools successfully used in signal processing, statistics, and machine learning. In this work, we aim to address three issues of the classical RKHS-based techniques. First, they require the RKHS to be known a priori, which is unrealistic in many applications. Furthermore, the choice of RKHS affects the shape and smoothness of the solution, thus impacting its performance. Second, RKHSs are ill-equipped to deal with heterogeneous degr…
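As background for the first issue the abstract raises, the sketch below shows the classical setup the paper moves beyond: kernel ridge regression in a single, fixed RKHS, where the kernel, and hence the space and the smoothness of the solution, must be chosen before seeing the data. This is a minimal illustrative baseline, not the paper's method; the Gaussian kernel family, its width, and the regularization weight are assumptions.

```python
import numpy as np

def gaussian_kernel(X, Z, width):
    """Gaussian kernel matrix k(x, z) = exp(-||x - z||^2 / (2 * width^2))."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def kernel_ridge_fit(X, y, width=0.2, reg=1e-2):
    """Classical RKHS estimate: f(x) = sum_n alpha_n k(x, x_n).

    The kernel width fixes the RKHS -- and thus the smoothness of f --
    a priori, which is the first issue raised in the abstract.
    """
    K = gaussian_kernel(X, X, width)
    alpha = np.linalg.solve(K + reg * np.eye(len(X)), y)
    return lambda Xq: gaussian_kernel(Xq, X, width) @ alpha

# Toy usage: one global width must serve the whole signal.
X = np.linspace(0, 1, 50)[:, None]
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * np.random.randn(50)
f = kernel_ridge_fit(X, y)
print(f(X[:5]))
```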

Cited by 4 publications (11 citation statements)
References 46 publications
“…The next result shows that under mild conditions on the distributions D_i, this equality holds even if the ℓ_i are non-convex. We note that, besides being the first step in the construction of Theorem 1, this result has also been used in other contexts (see, e.g., [46], [53], [54]).…”
Section: A. The Duality Gap
confidence: 94%

Constrained Learning with Non-Convex Losses

Chamon, Paternain, Calvo-Fullana et al. 2021
Preprint
Self Cite
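For context, the equality referenced in this quote is the absence of a duality gap for a constrained learning problem. The block below is a hedged reconstruction from the quote's own wording (distributions D_i, losses ℓ_i); the function class F, the objective loss ℓ_0, and the constraint levels c_i are notational assumptions, not a verbatim excerpt from the preprint.

```latex
% Sketch of the constrained learning primal and its dual (notation assumed).
% The quoted result states P* = D* (no duality gap) under mild conditions
% on the D_i, even when the losses ell_i are non-convex.
\begin{align*}
  P^\star &= \min_{f \in \mathcal{F}}\;
    \mathbb{E}_{(x,y) \sim \mathcal{D}_0}\!\big[\ell_0(f(x), y)\big]
    \quad \text{s.t.} \quad
    \mathbb{E}_{(x,y) \sim \mathcal{D}_i}\!\big[\ell_i(f(x), y)\big] \le c_i,
    \; i = 1, \dots, m, \\
  D^\star &= \max_{\lambda \ge 0}\; \min_{f \in \mathcal{F}}\;
    \mathbb{E}_{\mathcal{D}_0}\!\big[\ell_0(f(x), y)\big]
    + \sum_{i=1}^{m} \lambda_i
      \Big(\mathbb{E}_{\mathcal{D}_i}\!\big[\ell_i(f(x), y)\big] - c_i\Big).
\end{align*}
```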
“…The function class C and the complexity measure ρ(f) constrain the variability of f and dictate how it generalizes to unobserved samples x. In this paper we adopt the use of low complexity kernel representations as introduced in [25].…”
Section: Learning Low Complexity RKHS Representations
confidence: 99%
“…We are therefore searching for a function f that can be specified by as few kernels as possible while still passing within ε of all the elements of the training set. When we search for kernels to add to the representation, the search is over kernel centers s ∈ S and kernel parameters w ∈ W. The latter allows, e.g., a search over kernel widths; see [25] for details.…”
Section: Learning Low Complexity RKHS Representations
confidence: 99%
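The quote describes selecting as few kernels as possible, searching over both centers s ∈ S and parameters w ∈ W (e.g. widths), so that the fit passes within ε of every training sample. The sketch below is a minimal greedy illustration of that idea in the spirit of matching pursuit, not the algorithm of [25]; the candidate grids, the Gaussian kernel family, and the stopping rule are all assumptions.

```python
import numpy as np

def gaussian(x, center, width):
    """One Gaussian kernel atom k_w(x, s) evaluated at samples x."""
    return np.exp(-(x - center) ** 2 / (2.0 * width ** 2))

def greedy_multiwidth_fit(x, y, widths, eps, max_atoms=50):
    """Greedily add (center, width, coefficient) atoms until the fit
    passes within eps of every training point.

    Candidate centers are the training points themselves (the search over
    s in S); candidate widths come from the grid `widths` (the search
    over w in W), which is what allows a multiresolution fit.
    """
    atoms, residual = [], y.astype(float).copy()
    for _ in range(max_atoms):
        if np.max(np.abs(residual)) <= eps:  # within eps of all samples
            break
        # Pick the (center, width) whose kernel best matches the residual.
        best = None
        for s in x:
            for w in widths:
                k = gaussian(x, s, w)
                coef = (k @ residual) / (k @ k)
                score = abs(coef) * np.linalg.norm(k)
                if best is None or score > best[0]:
                    best = (score, s, w, coef)
        _, s, w, coef = best
        atoms.append((s, w, coef))
        residual -= coef * gaussian(x, s, w)
    return atoms

# Toy usage: a two-scale signal needs both wide and narrow kernels.
x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x) + 0.3 * np.exp(-(x - 0.5) ** 2 / (2 * 0.02 ** 2))
atoms = greedy_multiwidth_fit(x, y, widths=[0.02, 0.1, 0.3], eps=0.05)
print(f"{len(atoms)} kernels selected")
```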