2014
DOI: 10.1137/13094596x

On Sparse Interpolation and the Design of Deterministic Interpolation Points

Abstract: In this paper, we build a framework for sparse interpolation. We first investigate the theoretical limit on the number of unisolvent points for sparse interpolation in a general setting and address some basic questions of this topic. We also explore the relation between classical interpolation and sparse interpolation. We then consider the design of interpolation points for s-sparse functions in high-dimensional Chebyshev bases, for which possible applications include uncertainty quantification…
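As a rough illustration of the problem the abstract describes, here is a minimal sketch (my own, not the paper's method): recovering an s-sparse expansion in a one-dimensional Chebyshev basis from a few sample points by orthogonal matching pursuit. The sizes n_basis, n_samples, and s, and the use of random arcsine-distributed sample points, are illustrative assumptions rather than choices from the paper.

# Minimal sketch: s-sparse recovery in a 1-D Chebyshev basis via
# orthogonal matching pursuit (illustrative sizes, not from the paper).
import numpy as np

rng = np.random.default_rng(0)
n_basis, n_samples, s = 200, 40, 5          # basis size, #points, sparsity

# Sample points drawn iid from the Chebyshev (arcsine) measure on [-1, 1].
x = np.cos(np.pi * rng.random(n_samples))

# Sensing matrix: A[i, j] = T_j(x_i), the degree-j Chebyshev polynomial.
A = np.cos(np.outer(np.arccos(x), np.arange(n_basis)))

# Synthetic s-sparse coefficient vector and noiseless samples.
c_true = np.zeros(n_basis)
support_true = rng.choice(n_basis, size=s, replace=False)
c_true[support_true] = rng.standard_normal(s)
y = A @ c_true

def omp(A, y, s):
    """Greedy orthogonal matching pursuit: pick the column most correlated
    with the residual, then re-fit by least squares on the chosen support."""
    residual, support = y.copy(), []
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    c = np.zeros(A.shape[1])
    c[support] = coef
    return c

c_hat = omp(A, y, s)
print("max coefficient error:", np.max(np.abs(c_hat - c_true)))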

Cited by 40 publications (32 citation statements) | References 32 publications
“…I.e., ρ is the uniform measure on Γ = [−1, 1]^d, the PCE basis functions ψ_j are tensor-product Legendre polynomials, and Ξ is constructed via iid sampling from the Chebyshev (arcsine) measure. The use of Chebyshev sampling when approximating with a Legendre polynomial basis (where available data is only function values) has been widely investigated [37,49,24], and can produce better results (compared to uniform sampling) when large-degree approximations are required. Here we shall show how inclusion of gradient information can be accomplished in a systematic way.…”
Section: A Gradient Enhanced Compressed Sensing Approach (mentioning)
confidence: 99%
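A minimal sketch of the sampling setup this statement describes (my own illustration, not the cited works' code): draw points iid from the Chebyshev (arcsine) measure on [−1, 1]^d and form the tensor-product Legendre design matrix, together with the Chebyshev preconditioning weights commonly used in this setting. The sizes d, p, and m are arbitrary choices for the sketch.

# Minimal sketch: arcsine-sampled, preconditioned tensor-product Legendre
# design matrix (illustrative sizes, not from the cited works).
import numpy as np
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(1)
d, p, m = 2, 8, 300                  # dimension, degree per axis, #samples

# iid samples from the arcsine (Chebyshev) measure on [-1, 1]^d.
x = np.cos(np.pi * rng.random((m, d)))

# Orthonormal 1-D Legendre values: vals[i, j, k] = sqrt(2k+1) * P_k(x[i, j]).
vals = np.empty((m, d, p + 1))
for k in range(p + 1):
    coef = np.zeros(p + 1)
    coef[k] = 1.0
    vals[:, :, k] = np.sqrt(2 * k + 1) * legval(x, coef)

# Tensor-product basis: columns indexed by multi-indices in {0,...,p}^d.
A = vals[:, 0, :]
for j in range(1, d):
    A = (A[:, :, None] * vals[:, j, None, :]).reshape(m, -1)

# Chebyshev preconditioning weights, prod_j sqrt(pi/2) * (1 - x_j^2)^(1/4),
# which make the weighted rows samples of a bounded orthonormal system
# with respect to the arcsine measure.
w = np.prod(np.sqrt(np.pi / 2) * (1.0 - x**2) ** 0.25, axis=1)
A_precond = w[:, None] * A
print("design matrix shape:", A_precond.shape)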
“…The focus of this paper is on deterministic sampling schemes; such methods have also been investigated [21,37,7]. We remark again that polynomial grids constructed via Fekete or Leja methods are closely connected to our procedure [5,4,1].…”
Section: Introduction (mentioning)
confidence: 99%
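For concreteness, here is a minimal sketch of one such deterministic construction, a greedy one-dimensional Leja sequence on [−1, 1] (my own illustration, not the cited procedure; the candidate-grid size and starting point are arbitrary assumptions):

# Minimal sketch: greedy 1-D Leja points on [-1, 1].
import numpy as np

def leja_points(n, n_candidates=2000):
    """Greedily pick n points, each maximizing the product of distances
    to the points chosen so far over a fine candidate grid."""
    candidates = np.linspace(-1.0, 1.0, n_candidates)
    points = [1.0]                         # a common starting choice
    for _ in range(n - 1):
        dist = np.abs(candidates[:, None] - np.array(points)[None, :])
        objective = np.prod(dist, axis=1)
        points.append(float(candidates[np.argmax(objective)]))
    return np.array(points)

print(leja_points(8))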
“…One of the popular stochastic methodologies is generalized polynomial chaos (gPC) [3], which is an extension of the standard polynomial chaos method [4]. In addition, advanced gPC algorithms for high-dimensional stochastic problems have been developed [5][6][7][8][9][10][11][12][13][14][15].…”
Section: Introduction (mentioning)
confidence: 99%