2018
DOI: 10.1016/j.neucom.2018.02.009
Total stability of kernel methods

Abstract: Regularized empirical risk minimization using kernels and their corresponding reproducing kernel Hilbert spaces (RKHSs) plays an important role in machine learning. However, the actually used kernel often depends on one or on a few hyperparameters or the kernel is even data dependent in a much more complicated manner. Examples are Gaussian RBF kernels, kernel learning, and hierarchical Gaussian kernels which were recently proposed for deep learning. Therefore, the actually used kernel is often computed by a gr…
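For context, the regularized empirical risk minimization problem over an RKHS that the abstract refers to has the standard form below. This is a sketch of the usual setup, not a formula taken from the paper: \(L\) denotes a convex loss, \(\lambda > 0\) the regularization parameter, and \(H_k\) the RKHS of the kernel \(k\):

```latex
f_{P,\lambda,k} \;=\; \operatorname*{arg\,min}_{f \in H_k}\;
  \mathbb{E}_{(x,y)\sim P}\bigl[ L\bigl(y, f(x)\bigr) \bigr]
  \;+\; \lambda \, \lVert f \rVert_{H_k}^{2}
```

Total stability then asks how the minimizer \(f_{P,\lambda,k}\) changes when the whole triple \((P, \lambda, k)\) is perturbed, rather than the training distribution alone.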

Cited by 15 publications (13 citation statements) | References 25 publications
“…We will now state our first main result about the stability of SVMs. As mentioned in the introduction, this is a generalization of a result (Theorem 2.7) by Christmann et al (2018): First of all, we eliminated an additional condition on L that was required by Christmann et al (2018). Previously, L did not only need to be convex and Lipschitz continuous but also differentiable.…”
Section: Total Stability of SVMs
Confidence: 99%
“…In this section, we will show stability of SVMs with respect to slight changes in the triple (P, λ, k) consisting of probability measure, regularization parameter and kernel. Our notion of stability will be similar to that of (4), with the slight difference that we additionally need to consider ||k 1 − k 2 || ∞ as an exchange for the result considerably generalizing the referenced theorem by Christmann et al (2018), that is, the result being applicable to arbitrary positive regularization parameters and a larger class of loss functions. Thus, it will be of the type…”
Section: Total Stability of SVMs
Confidence: 99%
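The quoted statement truncates the bound it announces. Results of this "total stability" kind typically control the change in the SVM solution by the perturbation in each component of the triple \((P, \lambda, k)\) separately; a hedged sketch of the general shape (the constants \(c_1, c_2, c_3\) and the metric \(d\) on probability measures are assumptions here, not quantities quoted from the cited work):

```latex
\bigl\lVert f_{P_1,\lambda_1,k_1} - f_{P_2,\lambda_2,k_2} \bigr\rVert_{\infty}
\;\le\;
c_1 \, d(P_1, P_2)
\;+\; c_2 \, \lvert \lambda_1 - \lambda_2 \rvert
\;+\; c_3 \, \lVert k_1 - k_2 \rVert_{\infty}
```

The extra term \(\lVert k_1 - k_2 \rVert_{\infty}\) is exactly the addition the quoted passage mentions: it is the price of letting the kernel itself vary, e.g. when hyperparameters of a Gaussian RBF kernel are tuned on the data.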