2014
DOI: 10.1007/978-3-319-08795-5_22
Efficient and Scalable Nonlinear Multiple Kernel Aggregation Using the Choquet Integral

Cited by 8 publications (10 citation statements)
References 9 publications
“…Multiple kernel learning (MKL) is a way to learn the fusion of multiple known Mercer kernels (the building blocks) to identify a superior kernel. In Pinar et al (2015, 2016) and Hu et al (2013, 2014), a genetic algorithm (GA) based ℓp-norm linear convex sum of kernels, called GAMKLp, was proposed for feature-level fusion. In Pinar et al (2015), the nonlinear fusion of kernels was also explored.…”
Section: Experiments 4: Multiple Kernel Learning
confidence: 99%
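The ℓp-norm linear convex sum of kernels mentioned in this excerpt can be sketched as follows. The normalization scheme and the fixed weights are illustrative assumptions only; GAMKLp learns the weights with a genetic algorithm, which is omitted here.

```python
import numpy as np

def lp_normalize(w, p=2.0):
    """Clip weights to be non-negative and scale them to unit l_p norm."""
    w = np.maximum(np.asarray(w, dtype=float), 0.0)
    return w / np.sum(w ** p) ** (1.0 / p)

def aggregate_kernels(kernels, w, p=2.0):
    """K = sum_k w_k * K_k; a non-negative sum of Mercer kernels is Mercer."""
    w = lp_normalize(w, p)
    return sum(wk * Kk for wk, Kk in zip(w, kernels))

# Example: combine an identity-like kernel and an all-ones kernel.
# The weights [3, 4] normalize to [0.6, 0.8] under the l_2 norm.
K = aggregate_kernels([np.eye(2), np.ones((2, 2))], w=[3.0, 4.0], p=2.0)
print(K)
```

The resulting Gram matrix here is `0.6*I + 0.8*ones`, i.e. `[[1.4, 0.8], [0.8, 1.4]]`; any such non-negative combination remains a valid kernel matrix.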
“…While manual specification of the FM works for small sets of sources (there are already 16 possible combinations of sources in the power set of 4 sources), manually specifying the values of the FM for large collections of sources is virtually impossible. Thus, automatic methods have been proposed, such as the Sugeno λ-measure [39] and the S-decomposable measure [47], which build the measure from the densities (the worth of individual sources), and genetic algorithms [11,12,38,48], Gibbs sampling [49], and other learning methods [16,50,51], which build the measure from training data. Other works [52,53,54] have proposed learning FMs that reflect trends in the data; these have been applied specifically to crowd-sourcing, where the worth of individuals is not known but is extracted from the data.…”
Section: Fuzzy Measures and Fuzzy Integrals
confidence: 99%
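Building a Sugeno λ-measure from densities, as described above, amounts to solving ∏(1 + λ·dᵢ) = 1 + λ for the unique root λ > −1, λ ≠ 0, then combining densities via g(A ∪ {i}) = g(A) + dᵢ + λ·g(A)·dᵢ. A minimal sketch, assuming densities in (0, 1) and using plain bisection for the root:

```python
from itertools import combinations

def _f(lam, d):
    """f(lam) = prod(1 + lam*d_i) - (1 + lam); its nonzero root defines lambda."""
    p = 1.0
    for di in d:
        p *= 1.0 + lam * di
    return p - (1.0 + lam)

def sugeno_lambda(d, iters=200):
    """Solve for lambda in the Sugeno lambda-measure (lambda > -1, lambda != 0)."""
    s = sum(d)
    if abs(s - 1.0) < 1e-9:
        return 0.0  # densities already additive
    if s > 1.0:
        lo, hi = -1.0 + 1e-12, -1e-12   # root lies in (-1, 0)
    else:
        lo, hi = 1e-12, 1.0             # root lies in (0, inf)
        while _f(hi, d) < 0.0:          # expand until a sign change brackets it
            hi *= 2.0
    for _ in range(iters):              # plain bisection
        mid = 0.5 * (lo + hi)
        if _f(lo, d) * _f(mid, d) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def sugeno_measure(d):
    """Build g(A) for every subset A of sources from the densities d."""
    lam = sugeno_lambda(d)
    g = {}
    for r in range(len(d) + 1):
        for subset in combinations(range(len(d)), r):
            val = 0.0
            for i in subset:  # g(A u {i}) = g(A) + d_i + lam*g(A)*d_i
                val = val + d[i] + lam * val * d[i]
            g[subset] = val
    return g

g = sugeno_measure([0.3, 0.4, 0.2])
print(round(g[(0, 1, 2)], 6))  # measure of the full set is 1 by construction
```

Because the densities sum to 0.9 < 1 here, λ is positive and the measure is superadditive; g of the full source set comes out to 1 by construction.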
“…In [12], an additional gene was added to indicate different types of FMs, and slightly better performance was noted.…”
confidence: 99%
“…Furthermore, since kernels known to exploit the data's various features can be used as building blocks for MKL, it can do very well with heterogeneous data. Many works discuss MKL [8,9,10,11,12,13,14], and nearly all of them rely on operations that aggregate kernels in ways that preserve symmetry and positive semi-definiteness, such as element-wise addition and multiplication. Most MKL algorithms learn a "best" kernel space in which to classify by learning respective weights on each component kernel.…”
Section: Introduction
confidence: 99%
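The closure properties this excerpt relies on — element-wise addition and multiplication of Gram matrices preserve symmetry and positive semi-definiteness (the latter by the Schur product theorem) — can be checked numerically. The data and kernel choices below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))

# Two Mercer kernels evaluated on the same data: linear and RBF Gram matrices.
K_lin = X @ X.T
sq_dist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K_rbf = np.exp(-sq_dist / 2.0)

# A non-negative weighted sum and the element-wise (Hadamard) product
# are again symmetric PSD, so both are valid aggregated kernels.
K_sum = 0.7 * K_lin + 0.3 * K_rbf
K_prod = K_lin * K_rbf

for K in (K_sum, K_prod):
    assert np.allclose(K, K.T)              # symmetry
    assert np.linalg.eigvalsh(K).min() > -1e-8  # PSD up to round-off
print("both aggregates are symmetric PSD")
```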
“…Two MKL formulations explored in this chapter focus on aggregation using the Choquet fuzzy integral (FI) with respect to a fuzzy measure (FM) [15]. First, we investigate our previously proposed fuzzy integral-genetic algorithm (FIGA) approach to MKL [11,12], proving that it reduces to a special kind of linear convex sum (LCS) kernel aggregation. This leads to the proposition of the p-norm genetic algorithm MKL (GAMKLp) approach, which learns an MKL classifier using a genetic algorithm and a generalized p-norm weight domain.…”
Section: Introduction
confidence: 99%
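A minimal sketch of the discrete Choquet integral at the heart of these formulations: sources are visited in decreasing order of their values, and each value is weighted by the increment of the fuzzy measure. Storing the measure as a dict keyed by sorted index tuples is an illustrative encoding, not the chapter's implementation.

```python
def choquet(h, g):
    """Discrete Choquet integral of source values h w.r.t. fuzzy measure g.

    g maps sorted tuples of source indices to [0, 1],
    with g[()] == 0 and g[all sources] == 1.
    """
    order = sorted(range(len(h)), key=lambda i: h[i], reverse=True)
    total, prev, picked = 0.0, 0.0, []
    for i in order:  # add sources in decreasing order of their value
        picked.append(i)
        cur = g[tuple(sorted(picked))]
        total += h[i] * (cur - prev)
        prev = cur
    return total

# Illustrative two-source measure: g({0}) = 0.6, g({1}) = 0.5.
g = {(): 0.0, (0,): 0.6, (1,): 0.5, (0, 1): 1.0}
print(choquet([0.8, 0.4], g))  # 0.8*0.6 + 0.4*(1.0 - 0.6) = 0.64
```

With an additive measure this reduces to a plain weighted average, which is why the FIGA formulation can collapse to a linear convex sum in the special case the chapter identifies.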