2017
DOI: 10.1016/j.patcog.2017.05.005
Multiple kernel learning with hybrid kernel alignment maximization

Cited by 29 publications (14 citation statements)
References 5 publications
“…This optimization can be considered as a different mathematical model for obtaining the kernel parameters and their combination. Wang et al. [28] noted that there are two types of multiple kernel learning methods: one-stage and two-stage methods. These correspond to multiple kernel learning methods that are dependent on and independent of the classifier, respectively.…”
Section: Related Work (mentioning)
confidence: 99%
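The one-stage/two-stage distinction can be illustrated with a small sketch. The following is a hypothetical two-stage example, not taken from the cited paper: base-kernel weights are first chosen by their alignment with the label-derived target kernel, and the combined kernel is then handed to an ordinary SVM, so the weight-learning step is independent of the classifier. The function names, RBF gamma grid, and alignment-based weighting rule are all illustrative assumptions.

```python
# Hypothetical two-stage MKL sketch (illustrative only, not the cited method):
# stage 1 picks kernel weights by alignment with the label-derived target kernel,
# stage 2 trains a standard SVM on the combined precomputed kernel.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def alignment(K1, K2):
    # Frobenius alignment <K1, K2>_F / (||K1||_F * ||K2||_F)
    return np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2))

def two_stage_mkl(X, y, gammas=(0.1, 1.0, 10.0)):
    # y is expected in {-1, +1}; the ideal target kernel is y y^T
    target = np.outer(y, y).astype(float)
    kernels = [rbf_kernel(X, gamma=g) for g in gammas]

    # Stage 1: classifier-independent weights from kernel-target alignment
    weights = np.array([max(alignment(K, target), 0.0) for K in kernels])
    weights /= weights.sum()

    # Stage 2: train an off-the-shelf classifier on the combined kernel
    K = sum(w * Km for w, Km in zip(weights, kernels))
    clf = SVC(kernel="precomputed").fit(K, y)
    return clf, weights
```

A one-stage method would instead fold the weight optimization into the classifier's own training objective, so the two steps cannot be separated.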
“…There is no perfect theoretical basis for the construction or selection of a kernel function. To address this problem, many multiple kernel learning (MKL) methods based on kernel combinations have been proposed [5][6][7]. Gönen et al. [8] gave a taxonomy of multiple kernel learning algorithms and reviewed them in detail.…”
Section: Introduction (mentioning)
confidence: 99%
“…, e is the base of the natural logarithm, and m is the number of elements of the base kernel function set K. The generalization bound can be summarized from inequalities (3) to (5). The local Lipschitz constants $C_{\varphi_\lambda}$ and $M_{\varphi_\lambda}$ are estimated according to Equations (6) and (7), where $\varphi$ is the loss function and $\lambda$ is the regularization parameter of a two-layer minimization problem:…”
mentioning
confidence: 99%
“…It requires prior knowledge of the data distribution in the feature space. However, when the data distribution is unknown or complex, multiple kernel (MK) functions are combined to construct the feature space and strengthen the learning ability of the model [28][29][30][31]. The problem is then reduced to a search for appropriate parameters with respect to a given dataset.…”
Section: Introduction (mentioning)
confidence: 99%
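For context, the kernel combination referred to here is commonly written as a convex combination of base kernels; the following is a generic sketch in standard MKL notation, not a formula taken from the cited works:

$$K_\mu(x, x') = \sum_{m=1}^{M} \mu_m\, K_m(x, x'), \qquad \mu_m \ge 0, \quad \sum_{m=1}^{M} \mu_m = 1,$$

where the weights $\mu_m$ are the parameters searched for with respect to the given dataset.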