2016
DOI: 10.1007/s10994-016-5590-8

Learning deep kernels in the space of dot product polynomials

Abstract: Recent literature has shown the merits of having deep representations in the context of neural networks. An emerging challenge in kernel learning is the definition of similar deep representations. In this paper, we propose a general methodology to define a hierarchy of base kernels with increasing expressiveness and combine them via multiple kernel learning (MKL) with the aim of generating overall deeper kernels. As a leading example, this methodology is applied to learning the kernel in the space of Dot-Product…

Cited by 23 publications (32 citation statements). References 13 publications.

Citation statements:
“…The expressiveness of a kernel-based model is related to the number of dichotomies achievable by a linear separator in the feature space. Moreover, concerning the rank of the kernel matrix, we have the following result [13, Theorem 2, p. 7].…”
Section: Remark (mentioning, confidence: 92%)
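The rank-based notion of expressiveness quoted above can be made concrete with a small numerical check. The sketch below is our illustration, not code from the cited papers: it computes the rank of Gram matrices for dot-product polynomial kernels of increasing degree, showing how the rank, and hence the size of a shatterable subset, grows with the degree.

```python
import numpy as np

def dot_product_poly_gram(X, degree):
    """Gram matrix of the homogeneous dot-product polynomial kernel
    k(x, y) = <x, y>^degree."""
    return (X @ X.T) ** degree

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))  # m = 50 examples in 5 dimensions

# The rank of the kernel matrix bounds the size of a shatterable subset
# (the result cited as [13, Theorem 2] above), so it serves as a rough
# proxy for expressiveness; it grows with the polynomial degree.
for degree in (1, 2, 3):
    K = dot_product_poly_gram(X, degree)
    print(degree, np.linalg.matrix_rank(K))
```

For generic points in 5 dimensions the ranks are 5, 15, and 35: each degree maps into a richer feature space, up to the ceiling of m = 50 examples.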
“…In doing so, we turn our attention to the Gaussian and linear kernels, which are truly popular in learning applications. We provide theoretical results concerning their practical implementation and expressiveness [13], and we further analyze the spectrum of the kernel matrices constructed via VSKs. This study reveals the effectiveness of the proposed approach, especially for the Gaussian kernel; indeed, the condition number of the VSK kernel matrix is less than or equal to the condition number of the matrix constructed via the standard kernel.…”
Section: Introduction (mentioning, confidence: 99%)
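The condition-number claim in this statement can be probed empirically. A minimal sketch follows, assuming the standard variably scaled kernel (VSK) construction in which each point is augmented with an extra coordinate given by a scale function ψ before the base kernel is evaluated; the particular ψ used here is a hypothetical choice for illustration, not one prescribed by the quoted work.

```python
import numpy as np

def gaussian_gram(X, eps=3.0):
    """Gram matrix of the Gaussian kernel exp(-eps^2 * ||x - y||^2)."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-(eps ** 2) * sq_dists)

def vsk_gram(X, psi, eps=3.0):
    """VSK Gram matrix (assumed construction): append psi(x) as an extra
    coordinate, then evaluate the base Gaussian kernel on the lifted points."""
    return gaussian_gram(np.hstack([X, psi(X)]), eps)

rng = np.random.default_rng(1)
X = rng.uniform(size=(40, 2))
psi = lambda X: np.linalg.norm(X, axis=1, keepdims=True)  # hypothetical scale function

# Per the quoted result, the VSK matrix should be no worse conditioned
# than the standard Gaussian kernel matrix on the same data.
print("cond(K)     =", np.linalg.cond(gaussian_gram(X)))
print("cond(K_vsk) =", np.linalg.cond(vsk_gram(X, psi)))
```

Intuitively, lifting the points apart with the extra ψ coordinate pushes the Gaussian Gram matrix toward the identity, which can only improve its conditioning.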
“…The expressiveness of a kernel function can be roughly thought of as the number of dichotomies that can be realized by a linear separator in the corresponding feature space. As discussed in [14], a kernel's expressiveness can be captured by its rank: it can be proved that if K ∈ ℝ^(m×m) is a kernel matrix over a set of m examples, then there exists at least one subset of examples of size rank(K) that can be shattered by a linear function.…”
Section: Expressiveness of D-Kernel and C-Kernel (mentioning, confidence: 99%)
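The shattering result stated here admits a direct constructive check: on a subset whose kernel submatrix K_S is invertible, any labeling y can be realized exactly by the linear function f(x) = Σ_i α_i k(x_i, x) with α = K_S⁻¹ y. A minimal sketch (our illustration, not code from [14]):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 3))
K = X @ X.T                      # linear kernel Gram matrix; rank 3 here
r = np.linalg.matrix_rank(K)

# For generic data the leading r examples give an invertible submatrix.
S = np.arange(r)
K_S = K[np.ix_(S, S)]

# Every labeling y in {-1, +1}^r is realized exactly by the linear
# function f(x_j) = sum_i alpha_i k(x_i, x_j) with alpha = K_S^{-1} y.
for y in product([-1.0, 1.0], repeat=r):
    alpha = np.linalg.solve(K_S, np.array(y))
    assert np.array_equal(np.sign(K_S @ alpha), np.array(y))
print(f"all {2 ** r} dichotomies of a subset of size rank(K) = {r} realized")
```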
“…We will use this measure to assess the complexity of the C-Kernel and the D-Kernel. Using the definition given in [14], we say that a kernel function κ_i is more general than another kernel function κ_j when C(K^(i)…”
Section: Expressiveness of D-Kernel and C-Kernel (mentioning, confidence: 99%)
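The measure referenced in this statement is, on our reading, the spectral ratio of [14], defined for a kernel matrix K as C(K) = trace(K) / ‖K‖_F; it equals 1 for a rank-one matrix and √m for the m×m identity. A minimal sketch, assuming that definition:

```python
import numpy as np

def cosine_normalize(K):
    """Normalize a kernel matrix so that every diagonal entry equals 1."""
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

def spectral_ratio(K):
    """C(K) = trace(K) / ||K||_F, ranging from 1 (rank-one matrix)
    to sqrt(m) (identity matrix)."""
    return np.trace(K) / np.linalg.norm(K, "fro")

rng = np.random.default_rng(3)
X = rng.standard_normal((30, 4))

# For normalized dot-product kernels the off-diagonal entries shrink as
# the degree grows, so the spectral ratio climbs toward its maximum
# sqrt(m): the kernel becomes more expressive (closer to the identity).
for degree in (1, 2, 3):
    K = cosine_normalize((X @ X.T) ** degree)
    print(degree, spectral_ratio(K))
```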
“…In sociology the related Simpson index quantifies diversity [37], while in politics it is a measure of the effective number of parties [38]. In machine learning the same quantity serves as a metric of expressivity for learning kernels [39], and in neuroscience it is used to measure the dimensionality of neural activity [1, 40, 41]. The ubiquitous use of the Participation Ratio to measure the effective dimension of a complex dynamical system underscores its efficacy.…”
(mentioning, confidence: 99%)
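For reference, the Participation Ratio of a positive semidefinite matrix C with eigenvalues λ_i is PR(C) = (Σ_i λ_i)² / Σ_i λ_i², which equals 1 when a single mode dominates and n when all n modes contribute equally; note that it is exactly the square of the spectral ratio C(K) quoted in the kernel-expressiveness statements above. A minimal sketch (our illustration):

```python
import numpy as np

def participation_ratio(C):
    """PR(C) = (sum_i lam_i)^2 / sum_i lam_i^2 over the eigenvalues of C:
    1 when a single mode dominates, n when all n modes weigh equally."""
    lam = np.linalg.eigvalsh(C)
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(4)
A = rng.standard_normal((500, 100))
C = A.T @ A / 500                # sample covariance of 500 draws in 100-d

print(participation_ratio(C))
# Equivalent trace form, avoiding the eigendecomposition:
# PR = trace(C)^2 / trace(C @ C) = trace(C)^2 / ||C||_F^2.
print(np.trace(C) ** 2 / np.linalg.norm(C, "fro") ** 2)
```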