2015
DOI: 10.1016/j.neucom.2014.11.078
EasyMKL: a scalable multiple kernel learning algorithm

Cited by 151 publications (132 citation statements) | References 7 publications
“…EasyMKL (Aiolli and Donini 2015) is a recent MKL algorithm able to combine sets of base kernels by solving a simple quadratic problem. Besides its proven empirical effectiveness, a clear advantage of EasyMKL compared to other MKL methods is its high scalability with respect to the number of kernels to be combined.…”

Section: EasyMKL
confidence: 99%
“…The problem above is a min-max problem that can be reduced to a simple quadratic problem with a technical derivation described in Aiolli and Donini (2015). Specifically, let γ* be the unique solution of the following quadratic optimization problem:…”

Section: EasyMKL
confidence: 99%
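The excerpts above describe EasyMKL's core idea: find a distribution γ* over the positive and negative examples by solving one regularized quadratic problem over the sum of the base kernels, then weight each base kernel by the margin it induces at γ*. The sketch below illustrates that formulation with numpy/scipy; the function name `easymkl_weights`, the SLSQP solver choice, and the default Λ = 0.5 are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np
from scipy.optimize import minimize

def easymkl_weights(kernels, y, lam=0.5):
    """Hedged sketch of EasyMKL-style kernel weighting (after Aiolli & Donini, 2015).

    kernels: list of (n, n) base kernel matrices
    y:       labels in {-1, +1}, shape (n,)
    lam:     regularization trade-off Λ in [0, 1] (default here is illustrative)
    """
    n = len(y)
    Ksum = sum(kernels)                 # the quadratic problem uses the summed kernel
    Y = np.diag(y.astype(float))
    YKY = Y @ Ksum @ Y

    # regularized quadratic objective: (1 - Λ) γ'YKYγ + Λ ||γ||²
    def obj(g):
        return (1 - lam) * g @ YKY @ g + lam * g @ g

    # γ lives on two probability simplexes, one per class
    pos, neg = y == 1, y == -1
    cons = [
        {"type": "eq", "fun": lambda g: g[pos].sum() - 1.0},
        {"type": "eq", "fun": lambda g: g[neg].sum() - 1.0},
    ]
    g0 = np.where(pos, 1.0 / pos.sum(), 1.0 / neg.sum())   # uniform start per class
    res = minimize(obj, g0, bounds=[(0, None)] * n,
                   constraints=cons, method="SLSQP")
    g = res.x

    # weight each base kernel by the quadratic form it induces at γ*
    eta = np.array([g @ Y @ K @ Y @ g for K in kernels])
    return eta / eta.sum()
```

Because only one quadratic problem is solved regardless of how many base kernels are combined, per-kernel cost is limited to the final weighting step, which is the scalability advantage the first excerpt highlights.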
“…In addition, the inherent kernel trick of combining linear kernels and nonlinear kernels in MKL makes it more promising in solving information fusion problems. There is a significant amount of work in the literature on combining multiple kernels [15,16]. Various applications indicate that performance gains can be achieved by linear and nonlinear kernel combinations using MKL methods [17][18][19].…”

Section: Introduction
confidence: 99%
“…They also use some kernel functions that can solve a number of problems which are not linearly separable. These approaches are time-consuming (Aiolli & Donini, ), particularly in incremental learning, because not all data are available at the beginning. Research on new kernels is one of the open problems for this challenge (Hooshmand Moghaddam & Hamidzadeh, ).…”

Section: Introduction
confidence: 99%