2006
DOI: 10.1090/s0002-9939-06-08421-8
On generalized hyperinterpolation on the sphere

Abstract: It is shown that second-order results can be attained by the generalized hyperinterpolation operators on the sphere, which gives an affirmative answer to a question raised by Reimer in …

Cited by 28 publications (15 citation statements) · References 12 publications
“…A central assumption in [45,47], in addition to polynomial exactness, is that a Marcinkiewicz-Zygmund (M-Z) inequality is satisfied. Quadrature rules with positive weights and polynomial exactness automatically satisfy an M-Z inequality (see Dai [18,Theorem 2.1] and Mhaskar [47,Theorem 3.3]). However, neither decomposition of wavelets into needlets nor numerical implementation were studied in [45,47].…”
Section: Norms of Fourier Local Convolutions and Their Kernels
confidence: 99%
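The passage above notes that quadrature rules with positive weights and polynomial exactness automatically satisfy a Marcinkiewicz-Zygmund inequality. As a minimal illustrative sketch (not from the cited papers, and on the interval [-1, 1] rather than the sphere), Gauss-Legendre quadrature is a rule of exactly this kind: all weights are positive, and an n-node rule integrates polynomials of degree up to 2n - 1 exactly.

```python
import numpy as np

# Gauss-Legendre rule with n nodes: positive weights, exact for
# polynomials of degree <= 2n - 1 on [-1, 1].
n = 5
nodes, weights = np.polynomial.legendre.leggauss(n)

# Positive weights, as assumed in the M-Z inequality results.
assert np.all(weights > 0)

# Polynomial exactness: the integral of x^k over [-1, 1] is
# 0 for odd k and 2 / (k + 1) for even k.
for k in range(2 * n):
    exact = 0.0 if k % 2 else 2.0 / (k + 1)
    approx = np.sum(weights * nodes**k)
    assert abs(approx - exact) < 1e-12
```

On the sphere, the analogous rules are positive-weight quadratures exact for spherical polynomials up to a given degree; the one-dimensional case above only illustrates the two hypotheses (positivity and exactness) that the cited theorems combine.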
“…for s ∈ N are the quadrature weights of the quadrature rule Q_{Λ,s} = {(w_{i,s}, x_i) : w_{i,s} ≥ 0 and x_i ∈ Λ} with 0 ≤ w_{i,s} ≤ c_1 |D|^{-1}. It is easy to see that the WRLS estimator in (10) is a batch version of DWRLS, which assumes that all data are stored on a single large server and that WRLS is capable of handling them. According to Lemma 2, since matrix inversion is involved in WRLS, solving the optimization problem in (10) requires O(|D|^2) memory and O(|D|^3) floating-point operations, which is infeasible when the data size is huge, even if all the data could be collected without data-privacy concerns. The study of the approximation capability of WRLS (10) is necessary, since it enhances the understanding of DWRLS by determining which conditions are sufficient to guarantee that distributed learning performs similarly to its batch counterpart.…”
Section: Approximation Capability of WRLS
confidence: 99%
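The complexity claim in the passage above (cubic cost from matrix inversion) can be sketched with a generic weighted regularized least squares solve. The names `wrls`, `X`, `y`, `w`, and `lam` below are illustrative, not from the cited paper; in the kernel setting the passage describes, the linear system is |D| × |D|, which is where the O(|D|^2) memory and O(|D|^3) flop counts come from.

```python
import numpy as np

def wrls(X, y, w, lam):
    """Hypothetical weighted ridge (WRLS-style) estimator.

    Solves the weighted, regularized normal equations
        (X^T diag(w) X + lam * I) beta = X^T diag(w) y
    by a direct solve, which is cubic in the system size.
    """
    d = X.shape[1]
    A = X.T @ (w[:, None] * X) + lam * np.eye(d)
    b = X.T @ (w * y)
    return np.linalg.solve(A, b)

# Small noiseless check: with uniform weights and tiny regularization,
# the estimator recovers the generating coefficients.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true
beta = wrls(X, y, w=np.ones(100), lam=1e-8)
```

A distributed variant (DWRLS, as the passage describes) avoids the single large solve by fitting local estimators on data shards and combining them, trading one |D| × |D| system for many small ones.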