2020
DOI: 10.1109/access.2020.2998772

Multiple Kernel SVM Based on Two-Stage Learning

Abstract: In this paper, we introduce the idea of two-stage learning for multiple kernel SVM (MKSVM) and present a new MKSVM algorithm based on two-stage learning (MKSVM-TSL). The first stage is the pre-learning, whose aim is to obtain information about the data so that the "important" samples for classification can be generated in the formal learning stage; these samples form a uniformly ergodic Markov chain (u.e.M.c.). To study the proposed MKSVM-TSL algorithm comprehensively, we estimate the generalization bound of MK…
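The pre-learning idea in the abstract, selecting "important" (high-loss) samples by running a Markov chain over the training set, can be sketched generically. The following is a minimal Metropolis-style illustration under assumptions of my own (a hinge-loss acceptance rule and a zero-initialized linear model); it is not the paper's exact u.e.M.c. procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.5 * rng.normal(size=n))

w_vec = np.zeros(d)  # crude stand-in for a pre-learned model (assumption)

def hinge(i, w):
    """Hinge loss of sample i under linear model w."""
    return max(0.0, 1.0 - y[i] * (X[i] @ w))

def markov_sample(w, m=100):
    """Markov-chain sample selection: from the current sample, propose a
    random candidate j and accept it with probability min(1, loss_j/loss_i),
    so high-loss ("important") samples are visited more often."""
    chosen = [int(rng.integers(n))]
    while len(chosen) < m:
        j = int(rng.integers(n))
        li = hinge(chosen[-1], w) + 1e-12  # avoid division by zero
        lj = hinge(j, w) + 1e-12
        if rng.random() < min(1.0, lj / li):
            chosen.append(j)
    return chosen

idx = markov_sample(w_vec, m=100)
```

The selected indices `idx` would then feed the formal learning stage in place of the full training set.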

Cited by 7 publications (1 citation statement). References: 36 publications.
“…The reasons for focusing on multiple kernel learning (MKL) methods are threefold. First, the MKL approach is essentially a dynamic ensemble method that constructs a mixture kernel, thereby encoding complementary information. Second, MKL methods are known to be appropriate for small but wide datasets [40,41], as is the case with our high-dimensional generative embeddings extracted from the HMM. Third, integrating different kernels, such as polynomial and RBF kernels, helps MKL capture nonlinear decision boundaries even with limited data, and the kernel-based regularization makes it less likely to overfit [42].…”
Section: Multiple Kernel Learning
confidence: 99%
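The mixture-kernel idea in the statement above can be illustrated by combining RBF and polynomial kernels and feeding the result to an SVM as a precomputed kernel. A full MKL method would learn the mixture weight; here the weight `w` is fixed as an assumption for illustration.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

w = 0.6  # fixed mixture weight (assumption; MKL would learn this)

def mixture_kernel(A, B):
    """Weighted sum of an RBF kernel and a degree-3 polynomial kernel."""
    return w * rbf_kernel(A, B, gamma=1.0) + (1 - w) * polynomial_kernel(A, B, degree=3)

# SVC with kernel="precomputed" expects Gram matrices, not raw features.
clf = SVC(kernel="precomputed").fit(mixture_kernel(X_tr, X_tr), y_tr)
acc = clf.score(mixture_kernel(X_te, X_tr), y_te)
```

A convex combination of positive-definite kernels is itself positive definite, so the mixture is a valid kernel; this is the property MKL methods exploit when optimizing the weights.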