2013
DOI: 10.1007/s13373-013-0043-1

Applications of classical approximation theory to periodic basis function networks and computational harmonic analysis

Abstract: In this paper, we describe a novel approach to classical approximation theory of periodic univariate and multivariate functions by trigonometric polynomials. While classical wisdom holds that such approximation is too sensitive to the lack of smoothness of the target functions at isolated points, our constructions show how to overcome this problem. We describe applications to approximation by periodic basis function networks, and indicate further research in the direction of Jacobi expansion and approximation …
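As a minimal numerical sketch of the kind of filtered trigonometric approximation the abstract alludes to: Fourier coefficients of a periodic target are damped by a smooth lowpass function h (equal to 1 on [0, 1/2] and 0 beyond 1) before summation. The target |x|, the grid size, the degree n, and the particular C-infinity filter below are illustrative choices, not taken from the paper.

```python
import numpy as np

def bump(x):
    """exp(-1/x) for x > 0, else 0 (building block for a smooth cutoff)."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    pos = x > 0
    out[pos] = np.exp(-1.0 / x[pos])
    return out

def lowpass(t):
    """C-infinity filter: 1 on [0, 1/2], 0 on [1, inf), smooth in between."""
    s = 2.0 * (1.0 - np.asarray(t, dtype=float))
    return bump(s) / (bump(s) + bump(1.0 - s))

N = 512
x = 2 * np.pi * np.arange(N) / N - np.pi      # uniform grid on [-pi, pi)
f = np.abs(x)                                  # periodic target, corners at 0 and +/-pi
c = np.fft.fft(f)                              # discrete Fourier coefficients
k = np.abs(np.fft.fftfreq(N, d=1.0 / N))       # integer frequency magnitudes

n = 32
raw  = np.real(np.fft.ifft(np.where(k <= n, c, 0.0)))   # plain truncated sum
filt = np.real(np.fft.ifft(lowpass(k / n) * c))         # filtered sum

away = (np.abs(x) > 0.5) & (np.abs(x) < np.pi - 0.5)    # region away from the corners
print("max error away from corners, truncated:", np.abs(raw - f)[away].max())
print("max error away from corners, filtered: ", np.abs(filt - f)[away].max())
```

Away from the corners, the filtered sum's error decays much faster than the truncated sum's, illustrating how the construction sidesteps the sensitivity to isolated non-smooth points.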

Cited by 6 publications (3 citation statements)
References 62 publications (85 reference statements)
“…K to guarantee that the estimator we construct is agnostic to the unknown support size S. In practice, u_K and v_K can be replaced by the efficiently implementable lowpass filtered Chebyshev expansion [19], which achieves the same uniform error rate as the best polynomial approximation. Remark 2.…”
Section: B. Construction of the Optimal Estimator
Confidence: 99%
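As a rough illustration of the "lowpass filtered Chebyshev expansion" the quote describes (u_K, v_K, the estimator, and reference [19] belong to the citing paper and are not reproduced here): Chebyshev coefficients computed by Gauss-Chebyshev quadrature are damped by a lowpass window before evaluation. The target function, the degree K, and the raised-cosine window below are illustrative assumptions; a smoother window gives stronger localization.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

def lowpass(t):
    """Raised-cosine window: 1 on [0, 1/2], 0 on [1, inf)."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    out[t <= 0.5] = 1.0
    mid = (t > 0.5) & (t < 1.0)
    out[mid] = np.cos(np.pi * (t[mid] - 0.5)) ** 2
    return out

def filtered_chebyshev(f, K):
    """Coefficients of a lowpass-filtered Chebyshev expansion of f on [-1, 1]."""
    M = 4 * K                                    # oversampled quadrature size
    theta = np.pi * (np.arange(M) + 0.5) / M     # Chebyshev-Gauss angles
    fx = f(np.cos(theta))
    k = np.arange(K + 1)
    a = (2.0 / M) * (np.cos(np.outer(k, theta)) @ fx)  # coefficients a_k
    a[0] *= 0.5
    return a * lowpass(k / K)                    # damp the top coefficients

f = lambda x: np.abs(x - 0.3)                    # illustrative non-smooth target
a = filtered_chebyshev(f, 64)
xs = np.linspace(-1, 1, 2001)
print("uniform error of filtered expansion:", np.abs(chebval(xs, a) - f(xs)).max())
```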
“…In particular, by noticing that the expansion in Eq. (4) represents an (n − 1)-th order polynomial expression in A, we can re-interpret the reconstruction algorithm as a high-dimensional polynomial approximation algorithm or a neural network [20,21]. That is, we can generalize Eq.…”
Section: Methods
Confidence: 99%
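The quoted passage reads Eq. (4) of the citing paper as a degree-(n − 1) polynomial in the operator A; since that equation is not reproduced here, the following is only a generic sketch of how such a matrix polynomial can be applied to a vector with Horner's scheme. A, b, and the coefficients are placeholders, not values from the citing paper.

```python
import numpy as np

def matrix_poly_apply(coeffs, A, b):
    """Apply p(A) to b, where p(t) = coeffs[0] + coeffs[1] t + ...,
    via Horner's scheme: len(coeffs) - 1 matrix-vector products,
    with no explicit matrix powers formed."""
    y = coeffs[-1] * b
    for c in reversed(coeffs[:-1]):
        y = A @ y + c * b
    return y

# illustrative check against explicit powers of A
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
b = rng.standard_normal(5)
coeffs = [1.0, -0.5, 0.25, 0.125]               # degree n - 1 = 3
direct = sum(c * np.linalg.matrix_power(A, j) @ b for j, c in enumerate(coeffs))
print(np.allclose(matrix_poly_apply(coeffs, A, b), direct))  # True
```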
“…The sum of these two is given by E((f − P(x))²), where the expectation is with respect to the marginal distribution of x and the target function f is the conditional expectation of y given x. If the marginal distribution of x is known, then the split between approximation and sampling errors is no longer necessary, and one can obtain estimates as well as constructions directly from characteristics of the training data (e.g., [11,14,18]). In the classical paradigm where this distribution is not known, the approximation error decreases as the number of parameters in P increases to ∞.…”
Section: Introduction
Confidence: 99%
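To make the decomposition in the quote concrete, here is a small sketch in which a polynomial P is fit by least squares to noisy samples, the total risk E((f − P(x))²) is estimated by Monte Carlo under the marginal of x, and the approximation error of the model class is estimated by refitting on dense noiseless data; the remainder is attributed to sampling. The target f, the uniform marginal, the degree, and the noise level are all illustrative assumptions, not taken from the cited text.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
f = lambda x: np.exp(np.sin(np.pi * x))          # illustrative target
deg, m = 5, 200                                  # model size, training size

# training data: x drawn from the marginal (Uniform[-1, 1]), noisy labels y
x = rng.uniform(-1.0, 1.0, m)
y = f(x) + 0.1 * rng.standard_normal(m)
P = Polynomial.fit(x, y, deg)                    # least-squares polynomial fit

# total risk E((f - P(x))^2) under the marginal, by Monte Carlo
xt = rng.uniform(-1.0, 1.0, 100_000)
total = np.mean((f(xt) - P(xt)) ** 2)

# approximation error of the model class: a noiseless, dense refit stands in
# for the best L2 approximant among degree-deg polynomials
Pstar = Polynomial.fit(xt, f(xt), deg)
approx = np.mean((f(xt) - Pstar(xt)) ** 2)

print(f"total {total:.3e} = approximation {approx:.3e} + sampling {total - approx:.3e}")
```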