2009
DOI: 10.1007/s00365-009-9080-0
Some Properties of Gaussian Reproducing Kernel Hilbert Spaces and Their Implications for Function Approximation and Learning Theory

Cited by 96 publications (63 citation statements)
References 15 publications
“…Minh [26] also developed a general theory of reproducing kernel Hilbert spaces on the sphere and advocated using kernel methods to tackle spherical data. However, for some popular kernels such as the Gaussian [27] and polynomial [5] kernels, kernel methods suffer from either a problem similar to that of the localization methods or a drawback similar to that of the orthogonal series methods. In fact, it remains open whether there is an exclusive kernel for spherical data such that both the manifold structure of the sphere and the localization requirement are sufficiently considered.…”
Section: Introduction
Confidence: 99%
“…are an extended Chebyshev system, Theorem 3.1 guarantees the existence of a unique N-point quadrature rule for q > 0 and weight parameters ω_α > 0 such that the series converges for any ℓ > 0 and x, x′ ∈ Ω. Arguments identical to those used in [32, 19] establish that k_ℓ^dpow is a positive-definite kernel and that its RKHS H′_ℓ consists of functions…”
Section: Optimal Points in One Dimension
Confidence: 87%
“…However, for our setting, given that the kernels we use are continuous and f_ρ is often not (say when η(x) = η < 1/2), the convergence of ‖f_{z,λ} − f_ρ‖_ρ is either impossible or very slow (we discuss this in detail in [14]).…”
Section: Proof Methodology
Confidence: 99%
“…To prove Theorem 3, we need the following law of large numbers for Hilbert space-valued random variables, which is a consequence of a general inequality due to Pinelis [15].…”
Section: The Average Algorithm
Confidence: 99%