2012
DOI: 10.1155/2012/915920
Error Bounds for lp‐Norm Multiple Kernel Learning with Least Square Loss

Abstract: The problem of learning the kernel function with linear combinations of multiple kernels has attracted considerable attention recently in machine learning. Specifically, by imposing an lp-norm penalty on the kernel combination coefficients, multiple kernel learning (MKL) was shown to be useful and effective both for theoretical analysis and in practical applications (Kloft et al., 2009, 2011). In this paper, we present a theoretical analysis of the approximation error and learning ability of the lp-norm MKL. Our analysis shows e…

Cited by 5 publications (5 citation statements)
References 22 publications
“…Remark 1: (i) From Theorem 2, we have that when β = 1 and s tends to 0, R(sign(π(f_z))) − R(f_c) is arbitrarily close to O(n^{−1}), which is the same as that obtained in [23], [24] and [39] for the case of randomly independent samples. This means that the learning rate obtained in Theorem 2 is optimal for the u.e.M.c.…”
Section: Theoretical Analysis of MKSVM-TSL (supporting)
confidence: 61%
“…samples. In other words, Theorem 2 extends the previously known results on the MKSVM algorithm (2) from randomly independent samples [23], [24], [39] to u.e.M.c. samples.…”
Section: Theoretical Analysis of MKSVM-TSL (mentioning)
confidence: 65%
“…However, these multiple kernel learning algorithms sometimes do not perform better than the traditional unweighted kernel k = ∑_j k_j in SVM, and Cortes [8] asked, "can learning kernels help performance?". Recently Kloft and Blanchard [9] introduced a multiple kernel learning with ℓ_p-norm (p ≥ 1) approach, which has been shown effective in both theory and practice [10, 11]. Essentially, ℓ_p-norm MKL is a kind of empirical risk minimization algorithm with kernel candidate set {k = ∑_{j=1}^m d_j k_j | ‖d‖_p ≤ 1, d_j ≥ 0}.…”
Section: Introduction (mentioning)
confidence: 99%
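The kernel candidate set quoted above — nonnegative combination weights constrained to the ℓ_p unit ball — can be sketched in a few lines. This is a minimal illustration, not the cited algorithm: the data points, RBF bandwidths, and weight vector below are hypothetical, and the ball constraint is enforced by simple radial rescaling.

```python
import numpy as np

def lp_norm_combination(kernels, d, p=2.0):
    """Combine base Gram matrices with nonnegative weights d
    constrained to the l_p unit ball (||d||_p <= 1, d_j >= 0)."""
    d = np.maximum(d, 0.0)              # enforce d_j >= 0
    norm = np.sum(d ** p) ** (1.0 / p)
    if norm > 1.0:                      # rescale back onto the l_p ball
        d = d / norm
    return sum(dj * K for dj, K in zip(d, kernels))

# Toy example: two RBF Gram matrices on 1-D points (hypothetical data).
X = np.linspace(0, 1, 5)[:, None]
sq = (X - X.T) ** 2
kernels = [np.exp(-sq / (2 * s ** 2)) for s in (0.1, 1.0)]
K = lp_norm_combination(kernels, np.array([0.9, 0.9]), p=2.0)
```

The combined matrix `K` stays symmetric positive semidefinite because it is a nonnegative combination of PSD Gram matrices; in a full MKL solver the weights `d` would themselves be optimized jointly with the SVM dual variables rather than fixed as here.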
“…Recently, there has been increasing research interest in learning with abstract functional spaces, and considerable work has been done in [1][2][3] and so on.…”
Section: Introduction (mentioning)
confidence: 99%