2018
DOI: 10.1109/tnnls.2016.2635151
Localized Multiple Kernel Learning With Dynamical Clustering and Matrix Regularization

Cited by 24 publications (7 citation statements)
References 26 publications
“…In contrast, the more commonly used vector ℓp-norm regularisation [29] lacks an explicit mechanism to capture inter-kernel interactions. As such, matrix-norm regularisation has been observed to outperform its vector-norm counterpart in different settings [8], [37], [34], [38]. Note that the vector-norm constraint may be considered a special case of the matrix-norm constraint, since setting p = q reduces the matrix norm to a vector norm.…”
Section: Preliminaries
confidence: 99%
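The reduction noted in the excerpt above can be made concrete with a minimal sketch. It assumes the mixed matrix ℓp,q-norm is taken as an ℓp-norm over each row of the kernel-weight matrix followed by an ℓq-norm across the resulting row norms; the function name and matrix layout are illustrative, not the paper's exact notation.

```python
import numpy as np

def mixed_norm(A, p, q):
    """Mixed (p, q)-norm: l_p over each row, then l_q across row norms."""
    row_norms = np.sum(np.abs(A) ** p, axis=1) ** (1.0 / p)
    return np.sum(row_norms ** q) ** (1.0 / q)

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# With p == q the mixed norm collapses to the entry-wise vector l_p norm
# of all matrix entries, i.e. the vector-norm constraint as a special case.
assert np.isclose(mixed_norm(A, 2, 2), np.linalg.norm(A.ravel(), 2))
```

For p = q = 2 this is simply the Frobenius norm, which is why the vector-norm penalty appears as a special case of the matrix-norm penalty.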
“…First, the MKL problem is formulated in the presence of an ℓp-norm constraint on kernel weights. More recent studies have demonstrated that a more general matrix ℓp,q-norm regularisation on kernel weights may yield better performance in different application settings [8], [37], [34], [38]. Second, the localised MKL method in [36] decouples the learning task into a set of disjoint MKL problems, each associated with a distinct region of the data.…”
Section: Introduction
confidence: 99%
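The region-wise decoupling described in this excerpt can be sketched in a few lines. This is a toy illustration under assumed conventions (cluster-wise weights, one cluster per sample, and the `localized_kernel` helper are hypothetical, not the cited paper's exact formulation): each sample inherits the kernel weights of its region, and entry (i, j) of the combined kernel is weighted by both samples' local weights.

```python
import numpy as np

def localized_kernel(K_list, weights, labels):
    """Combine M base kernels with region-specific weights.

    K_list  : list of M (n, n) base kernel matrices
    weights : (C, M) kernel weights, one row per cluster/region
    labels  : (n,) cluster assignment of each sample
    """
    n = K_list[0].shape[0]
    K = np.zeros((n, n))
    for m, Km in enumerate(K_list):
        w = weights[labels, m]        # per-sample weight for kernel m
        K += np.outer(w, w) * Km      # w_i * w_j * K_m(x_i, x_j)
    return K

# Degenerate case: a single region with unit weights recovers the
# plain (global) sum of base kernels.
K1, K2 = np.eye(2), np.ones((2, 2))
labels = np.zeros(2, dtype=int)
K = localized_kernel([K1, K2], np.ones((1, 2)), labels)
assert np.allclose(K, K1 + K2)
```

With more than one cluster, samples in different regions can mix the base kernels with different weights, which is the "disjoint MKL problems" idea the excerpt refers to.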
“…In the context of multiple kernel learning (MKL), while the vast majority of existing algorithms are global in nature, meaning the kernel weights are shared across the whole observation space, the issue was initially identified in [30] and has received considerable attention since then [31], [32], [33], [34], [35]. However, existing localised multiple kernel learning methods use non-convex objective functions, which raises questions regarding their generalisation capacity.…”
Section: Introduction
confidence: 99%
“…Multiple kernel clustering (MKC) aims to extract complementary information from multiple pre-specified kernels and then categorize data with close patterns or structures into the same cluster (Zhao, Kwok, and Zhang 2009; Kloft, Rückert, and Bartlett 2010; Kloft et al. 2011; Yu et al. 2011; Huang, Chuang, and Chen 2011; Gönen and Alpaydın 2011; Zhou et al. 2015; Han et al. 2016; Wang et al. 2017b; Zhou et al. 2021a; Liu et al. 2021a). Due to its ability to mine inherent non-linear information, MKC has been intensively researched and commonly applied to various applications (Liu et al. 2016; Liu et al. 2017; Bhadra, Kaski, and Rousu 2017; Liu et al. 2019; Zhou et al. 2019, 2021b).…”
Section: Introduction
confidence: 99%