2017
DOI: 10.1007/978-3-319-69802-1_3

Compressed Sensing Approaches for Polynomial Approximation of High-Dimensional Functions

Abstract: In recent years, the use of sparse recovery techniques in the approximation of high-dimensional functions has garnered increasing interest. In this work we present a survey of recent progress in this emerging topic. Our main focus is on the computation of polynomial approximations of high-dimensional functions on d-dimensional hypercubes. We show that smooth, multivariate functions possess expansions in orthogonal polynomial bases that are not only approximately sparse, but possess a particular type of structur…

Cited by 89 publications (85 citation statements) · References 83 publications
“…This approach is based on three main elements: sparse approximation of the function with respect to orthogonal polynomials, random pointwise sampling, and sparse recovery via a (weighted) $\ell^1$-minimization decoder. Combining these three ingredients, it is possible to construct approximations using a number of pointwise evaluations that depends only logarithmically on $d$ [1,4,23]. This feature is particularly appealing in uncertainty quantification applications, where the function to approximate is a quantity of interest of a parametric PDE [48,67].…”
Section: Introduction (mentioning)
confidence: 99%
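To make the three ingredients in this excerpt concrete, here is a minimal numerical sketch (not the authors' code): the test function, the dimension, the index set, and the weights below are all illustrative choices, and the decoder is modeled with the cvxpy library.

import numpy as np
import cvxpy as cp
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(0)
d, deg, m = 2, 7, 40          # dimension, max degree per coordinate, sample count
idx = [(i, j) for i in range(deg + 1) for j in range(deg + 1)]   # 64 basis functions

def f(Y):
    # Smooth test function on [-1, 1]^d (illustrative choice).
    return np.exp(-np.sum(Y**2, axis=-1))

Y = rng.uniform(-1, 1, size=(m, d))   # random pointwise samples
b = f(Y)

# Measurement matrix: tensor-product Chebyshev polynomials at the samples.
def T(n, x):
    return C.chebval(x, [0] * n + [1])

A = np.array([[T(i, y[0]) * T(j, y[1]) for (i, j) in idx] for y in Y])

# Weighted l1-minimization decoder; the weights are an illustrative choice
# that penalizes higher-order coefficients.
w = np.array([np.sqrt((i + 1) * (j + 1)) for (i, j) in idx])
x = cp.Variable(len(idx))
cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))), [A @ x == b]).solve()
print("recovered nonzeros:", np.sum(np.abs(x.value) > 1e-6))

With m = 40 samples and 64 unknown coefficients the linear system is underdetermined; it is the weighted $\ell^1$ decoder, exploiting the approximate sparsity of the Chebyshev coefficients, that makes recovery possible and is what allows the sample count to scale only logarithmically with $d$.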
“…Specifically, if the vector of data is $y = y_{\mathrm{exact}} + e$, where $y_{\mathrm{exact}}$ is the error-free data and $e$ is the vector of errors, then one assumes the bound $\|e\|_2 \le \eta$, (1) for some known $\eta > 0$. In this case, sparse regularization performed using the (weighted) quadratically-constrained basis pursuit decoder admits rigorous theoretical recovery guarantees [1,4,23,51,66]. In practice, however, a bound of the form (1) is usually unknown, since the sources of error (i), (ii), and (iii) are function-dependent.…”
Section: Introduction (mentioning)
confidence: 99%
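The decoder named in this excerpt can be sketched as follows, with synthetic data standing in for function samples; the noise bound $\eta$ of the form (1) is assumed known here, which, as the excerpt points out, is rarely true in practice.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
m, N = 40, 64
A = rng.standard_normal((m, N)) / np.sqrt(m)   # stand-in measurement matrix
x_true = np.zeros(N)
x_true[[3, 17, 42]] = [1.0, -0.5, 0.8]         # sparse ground truth
e = 1e-3 * rng.standard_normal(m)              # measurement error
b = A @ x_true + e                             # y = y_exact + e
eta = np.linalg.norm(e)                        # assumed a priori bound (1)

# Quadratically-constrained basis pursuit: minimize ||x||_1 subject to
# the residual lying in the l2 ball of radius eta.
x = cp.Variable(N)
cp.Problem(cp.Minimize(cp.norm1(x)), [cp.norm2(A @ x - b) <= eta]).solve()
print("recovery error:", np.linalg.norm(x.value - x_true))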
“…Similar to those of [2], the results of this section are nonuniform recovery guarantees: they ensure recovery of a single $f$ from a random draw of sample points. For the unaugmented case, uniform recovery guarantees for Chebyshev and Legendre polynomials (with $\mu = \nu$) have been proved in [5,13]. The corresponding sample complexity estimates are similar to (4.11), except with higher log factors.…”
Section: Discussion (mentioning)
confidence: 82%
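For orientation, and not quoted from the paper, the sample complexity estimates in this line of work for recovery on lower sets of size $s$ typically take the form, up to constants and logarithmic factors,
$$m \gtrsim s^{\gamma}, \qquad \gamma = \frac{\log 3}{\log 2} \approx 1.585 \ \text{(Chebyshev)}, \qquad \gamma = 2 \ \text{(Legendre)},$$
with the precise log factors varying across [2,5,13].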
“…In tandem with these results, a series of works [2,5,13] have shown that quasi-best $s$-term approximations in lower sets can be obtained by solving the weighted $\ell^1$-minimization problem (2.8) with a suitable choice of weights. Since the union of all lower sets of size $s$ is precisely the hyperbolic cross index set
$$\bigcup \{\Delta : |\Delta| \le s,\ \Delta \text{ lower}\} = \Lambda^{\mathrm{HC}}_s, \quad (2.10)$$…”
Section: Lower Sets (mentioning)
confidence: 88%
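The identity (2.10) rests on the observation that a multi-index $\nu$ belongs to some lower set of size at most $s$ exactly when its enclosing box $\{\mu : \mu \le \nu\}$, which has $\prod_j (\nu_j + 1)$ elements, contains at most $s$ of them. The brute-force check below verifies this in a toy case; the dimension and sparsity are illustrative.

import itertools
import numpy as np

d, s = 2, 4   # illustrative dimension and sparsity

def is_lower(S):
    # S is lower (downward closed) if decreasing any coordinate of any
    # member by one (floored at zero) stays inside S.
    return all(tuple(np.maximum(np.array(nu) - e, 0)) in S
               for nu in S for e in np.eye(d, dtype=int))

# Every lower set of size <= s lives inside {0, ..., s-1}^d.
grid = list(itertools.product(range(s), repeat=d))

union_of_lower_sets = set()
for k in range(1, s + 1):
    for T in itertools.combinations(grid, k):
        S = set(T)
        if is_lower(S):
            union_of_lower_sets |= S

hyperbolic_cross = {nu for nu in grid if np.prod(np.array(nu) + 1) <= s}
print(union_of_lower_sets == hyperbolic_cross)   # True

The enumeration is only feasible for toy sizes, but the box-counting argument in the lead-in is dimension-independent and is the general reason the identity holds.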