2020
DOI: 10.48550/arxiv.2009.10683
Preprint

Deep Neural Tangent Kernel and Laplace Kernel Have the Same RKHS

Abstract: We prove that the reproducing kernel Hilbert spaces (RKHS) of a deep neural tangent kernel and the Laplace kernel include the same set of functions, when both kernels are restricted to the sphere S^{d−1}. Additionally, we prove that the exponential power kernel with a smaller power (making the kernel less smooth) leads to a larger RKHS, both when it is restricted to the sphere S^{d−1} and when it is defined on the entire R^d.
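The exponential power kernel family mentioned in the abstract can be sketched concretely. The following is a minimal illustration (function and variable names are mine, not from the paper): k_gamma(x, y) = exp(−‖x − y‖^gamma), where gamma = 1 recovers the Laplace kernel and gamma = 2 the Gaussian kernel (up to bandwidth); the paper's claim is that a smaller gamma yields a strictly larger RKHS.

```python
import numpy as np

def exp_power_kernel(x, y, gamma):
    """Exponential power kernel k_gamma(x, y) = exp(-||x - y||^gamma).

    gamma = 1 is the Laplace kernel, gamma = 2 the Gaussian kernel
    (up to bandwidth); smaller gamma makes the kernel less smooth at x = y.
    """
    return np.exp(-np.linalg.norm(np.asarray(x) - np.asarray(y)) ** gamma)

# Two points on the unit sphere S^{d-1}, matching the paper's restriction.
rng = np.random.default_rng(0)
x = rng.standard_normal(5); x /= np.linalg.norm(x)
y = rng.standard_normal(5); y /= np.linalg.norm(y)

laplace = np.exp(-np.linalg.norm(x - y))  # Laplace kernel computed directly
assert np.isclose(exp_power_kernel(x, y, 1.0), laplace)
```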

Cited by 16 publications (25 citation statements). References 17 publications.

“…Although our analysis only concerns overparametrized one-hidden-layer ReLU neural networks, it can potentially apply to other types of neural networks in the NTK regime. Recently, it has been shown that overparametrized multi-layer networks correspond to the Laplace kernel [35,36]. As long as the trained neural networks can approximate the classifier induced by the NTK, our results can be naturally extended.…”
Section: Robustness and Calibration Error
confidence: 69%
“…Proof of Theorem 3.2. By the equivalence of the RKHS generated by the Laplace kernel and N [35,36], it can be shown that N can be embedded into the Sobolev space W_2^ν for some ν > d/2. Consider the Hölder space C_b^{0,α} for 0 < α ≤ 1 equipped with the norm…”
Section: E.2 Proof of Theorem 3.2
confidence: 99%
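The embedding step quoted above rests on the standard Sobolev embedding theorem; a hedged statement of the version being invoked (the exponent bookkeeping here is my reconstruction of the standard result, not the cited proof):

```latex
% Sobolev embedding into a Holder space (standard form):
% for 0 < \alpha \le 1 and \nu > d/2 + \alpha,
W_2^{\nu}(\mathbb{R}^d) \hookrightarrow C_b^{0,\alpha}(\mathbb{R}^d),
\qquad \|f\|_{C_b^{0,\alpha}} \le C \, \|f\|_{W_2^{\nu}} .
```

So once the NTK RKHS is embedded into W_2^ν with ν exceeding d/2 by enough margin, its functions inherit Hölder continuity, which is what the quoted argument uses next.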
“…The proof of this result can be found in [6]. The similarity between neural tangent kernel and the Laplace kernel was first documented in [15], and a rigorous theory was developed in [10].…”
Section: A1 Proofs of Lemma A1–Lemma A3
confidence: 91%
“…For the theorem to work for learning with neural networks as we discussed in Section 5, it has been shown that the neural tangent kernel also satisfies the required property in different settings. The main argument is that neural tangent kernel is equivalent to kernel regression with the Laplace kernel as proved in [10]. We cite the following result for the two-layer neural network model.…”
Section: A1 Proofs of Lemma A1–Lemma A3
confidence: 98%
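The two-layer case quoted above admits a well-known closed form that can be checked numerically. The sketch below is mine, not from the cited works: normalization conventions (the factor 1/2 from the standard NTK parametrization) vary across papers, and the function names are assumptions. It computes the two-layer ReLU NTK via the arc-cosine kernel formulas and verifies it against a Monte Carlo average over random ReLU features.

```python
import numpy as np

def relu_ntk(u):
    """Closed-form NTK of a two-layer ReLU network for unit-norm inputs
    with inner product u = <x, y>, using the arc-cosine kernel formulas:
      kappa0(u) = (pi - arccos u) / pi
      kappa1(u) = (u (pi - arccos u) + sqrt(1 - u^2)) / pi
      NTK(u)    = (u * kappa0(u) + kappa1(u)) / 2
    """
    u = np.clip(u, -1.0, 1.0)
    theta = np.arccos(u)
    kappa0 = (np.pi - theta) / np.pi
    kappa1 = (u * (np.pi - theta) + np.sqrt(1.0 - u * u)) / np.pi
    return 0.5 * (u * kappa0 + kappa1)

# Monte Carlo check against random ReLU features w ~ N(0, I_d):
#   E[1{w.x>0} 1{w.y>0}]        = kappa0(u) / 2   (gradient w.r.t. input weights)
#   E[relu(w.x) relu(w.y)]      = kappa1(u) / 2   (gradient w.r.t. output weights)
rng = np.random.default_rng(1)
x = np.array([1.0, 0.0, 0.0])
y = np.array([0.6, 0.8, 0.0])  # unit vectors with u = <x, y> = 0.6
W = rng.standard_normal((200_000, 3))
gx, gy = W @ x, W @ y
mc = (x @ y) * np.mean((gx > 0) & (gy > 0)) \
     + np.mean(np.maximum(gx, 0) * np.maximum(gy, 0))
assert abs(mc - relu_ntk(x @ y)) < 0.02
```

The Laplace-kernel equivalence quoted above is a statement about this NTK's spectrum, not a pointwise identity: both kernels' eigenvalues on the sphere decay at the same polynomial rate, so their RKHSs coincide as function sets.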
“…For the special case of the rectified linear unit (ReLU) activation function, a_1(·), it was shown that λ_m ∼ m^{−d/(d−1)}, and consequently the RKHS of the corresponding NT kernel is equivalent to that of the Laplace kernel (that is, a special Matérn kernel with ν = 1/2) (Geifman et al., 2020; Chen & Xu, 2020). Our contribution includes generalizing this result by showing the equivalence between the RKHS of the NT kernel, under various smoothness of the activation functions, and that of a Matérn kernel with the corresponding smoothness (see Section 3.3).…”
Section: Contributions
confidence: 99%
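The parenthetical identification above — the Laplace kernel as the Matérn kernel with ν = 1/2 — can be verified directly from the general Matérn formula. A minimal sketch (function names are mine; it assumes r > 0 and unit lengthscale, and uses SciPy's modified Bessel function of the second kind):

```python
import numpy as np
from scipy.special import kv, gamma as Gamma

def matern(r, nu, ell=1.0):
    """General Matern kernel for r > 0:
    k(r) = 2^{1-nu} / Gamma(nu) * (sqrt(2 nu) r / ell)^nu * K_nu(sqrt(2 nu) r / ell).
    """
    z = np.sqrt(2.0 * nu) * np.asarray(r, dtype=float) / ell
    return (2.0 ** (1.0 - nu) / Gamma(nu)) * z ** nu * kv(nu, z)

r = np.linspace(0.01, 3.0, 50)
# At nu = 1/2, K_{1/2}(z) = sqrt(pi / (2 z)) e^{-z}, so the formula
# collapses to exp(-r / ell): exactly the Laplace kernel.
assert np.allclose(matern(r, nu=0.5), np.exp(-r))
```

Larger ν gives a smoother Matérn kernel and hence a smaller RKHS, which is the axis along which the quoted paper generalizes the ReLU result to other activation smoothness classes.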