2009 International Joint Conference on Neural Networks
DOI: 10.1109/ijcnn.2009.5178897
Representation and feature selection using multiple kernel learning

Cited by 19 publications (18 citation statements); references 10 publications.
“…2, it determines the weight coefficients for each kernel for improved performance, while those weights indicate the relevance of the associated features for the learning task. Although the weighted sum of these kernels, calculated from individual variables, is expected to improve the classification performance, results reported in previous works such as [13] did not achieve significant improvements on several benchmark datasets. Moreover, existing variable selection methods usually regard all variables as coming from the same domain, and the distributions of each variable are assumed to be the same, with some data normalization techniques applied.…”
Section: B. MKL for Feature Fusion and Variable Selection
confidence: 85%
“…A more flexible learning model using multiple kernels instead of one, which is known as multiple kernel learning (MKL), has recently been proposed [12]. Since MKL is able to better represent or discriminate between data using multiple base kernels, it has been shown to improve the performance of many learning tasks, including feature fusion (e.g., [3], [6], [9]) and variable selection (e.g., [13], [14]). …”
Section: Related Work
confidence: 99%
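The statement above describes the core MKL idea: a combined kernel is formed as a weighted sum of base kernels, and when each base kernel is computed on a single input variable, the learned weights indicate that variable's relevance. As a rough illustration only (the cited papers' actual formulations and optimization procedures differ), here is a minimal numpy sketch of such a weighted per-variable kernel combination, with hypothetical fixed weights standing in for learned ones:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Gram matrix of the RBF kernel between the rows of x and y."""
    sq = (np.sum(x**2, axis=1)[:, None]
          + np.sum(y**2, axis=1)[None, :]
          - 2.0 * x @ y.T)
    return np.exp(-gamma * sq)

def combined_kernel(X, weights, gamma=1.0):
    """Weighted sum of one base kernel per input variable.

    Each base kernel K_m is computed on a single feature column of X;
    the weight beta_m then indicates that feature's relevance. The
    weights are assumed non-negative and summing to 1, the usual MKL
    simplex constraint (in real MKL they are learned, not fixed).
    """
    n, d = X.shape
    K = np.zeros((n, n))
    for m in range(d):
        col = X[:, m:m + 1]  # single-variable view, shape (n, 1)
        K += weights[m] * rbf_kernel(col, col, gamma)
    return K

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
beta = np.array([0.7, 0.2, 0.1])  # hypothetical relevance weights
K = combined_kernel(X, beta)
print(K.shape)  # (5, 5)
```

Because each RBF base kernel has a unit diagonal and the weights sum to one, the combined Gram matrix is symmetric with ones on its diagonal; in an actual MKL system the `beta` vector would be optimized jointly with the classifier, and small learned weights would flag the corresponding variables as irrelevant.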
See 3 more Smart Citations