2019
DOI: 10.1007/s00365-019-09467-0

Compressive Hermite Interpolation: Sparse, High-Dimensional Approximation from Gradient-Augmented Measurements

Abstract: We consider the sparse polynomial approximation of a multivariate function on a tensor product domain from samples of both the function and its gradient. When only function samples are prescribed, weighted ℓ1 minimization has recently been shown to be an effective procedure for computing such approximations. We extend this work to the gradient-augmented case. Our main results show that for the same asymptotic sample complexity, gradient-augmented measurements achieve an approximation error bound in a stronger …
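To make the procedure described in the abstract concrete, the following is a minimal sketch of weighted ℓ1 minimization from gradient-augmented samples, in one dimension with a Chebyshev basis. It assumes the cvxpy package; the basis, the weights w_k, and the test function are illustrative assumptions, not the paper's exact multivariate tensor-product setup.

```python
import numpy as np
import cvxpy as cp
from numpy.polynomial.chebyshev import Chebyshev

rng = np.random.default_rng(0)

# Hypothetical test function: a sparse Chebyshev expansion.
N = 60                                    # basis size T_0, ..., T_{N-1}
true_coefs = np.zeros(N)
true_coefs[[1, 7, 23]] = [1.0, -0.5, 0.25]
f = Chebyshev(true_coefs)
fp = f.deriv()

# Gradient-augmented sampling: each point yields f(x_i) and f'(x_i),
# so m points give 2m measurements (still underdetermined, 2m < N).
m = 15
x = rng.uniform(-1.0, 1.0, m)
A_val = np.column_stack([Chebyshev.basis(k)(x) for k in range(N)])
A_der = np.column_stack([Chebyshev.basis(k).deriv()(x) for k in range(N)])
A = np.vstack([A_val, A_der])
b = np.concatenate([f(x), fp(x)])

# Weights growing with the degree (a common generic choice; the
# paper's weight sequence may differ).
w = np.sqrt(np.arange(N) + 1.0)

c = cp.Variable(N)
problem = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, c))),
                     [A @ c == b])
problem.solve()
print("coefficient recovery error:", np.linalg.norm(c.value - true_coefs))
```

With noiseless samples an equality constraint suffices; for noisy data one would relax it to a tolerance, e.g. norm2(A @ c - b) <= eta.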

Cited by 49 publications (8 citation statements) | References 37 publications
“…An example is the discrete-in-space, continuous-in-time model, which can occur when sensors in physical space take continuous recordings of a time-dependent function f(y, t). Another problem, which arises commonly in uncertainty quantification (see [13,44,76] and references therein), is that of measuring both the function f(y) and its gradient ∇f(y) simultaneously at a sample point y. We anticipate that many of the key results of this work can be extended to substantially more general sampling models.…”
Section: Conclusion and Challenges
confidence: 95%
“…The application of ℓ1-minimization for computing sparse polynomial approximations of functions was first considered in [18,39,65,79,97]. This led to a substantial amount of subsequent research, including [3,4,13,14,19,26,43,44,52,55,57,63,74,76,78,81,86,90,91,94,95,98–102]. Specific extensions to weighted and lower sparsity models were developed in [1–3,5,9,25,75,80,99].…”
Section: Related Literature
confidence: 99%
“…For m = 1 this hypothesis is confirmed numerically in Figure 4. For an application in the setting of weighted sparsity we refer to the recent work [36]. Note that this does not have to be the case in general.…”
Section: Dependence on the Seminorm
confidence: 99%
“…Sparse approximation of data that includes gradient information by (possibly multivariate) functions is a heavily investigated subject, ranging from local piecewise spline interpolation to global compressive sparsification by ℓ1-norm optimization [1]. The Prony algorithm, which was originally designed for sums of exponentials [25] and was used for sparse polynomials over finite fields to decode the 1959 BCH digital error-correction code, is suitable for floating-point data [6,7].…”
Section: Relation to Previous Work
confidence: 99%
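As an illustration of the classical Prony method mentioned in the excerpt above, here is a minimal sketch for exact, noise-free samples of an s-term exponential sum; the function name and the test data are hypothetical.

```python
import numpy as np

def prony(h, s):
    """Recover nodes z_k and coefficients c_k from exact samples
    h[j] = sum_k c_k * z_k**j, j = 0, ..., 2s-1."""
    h = np.asarray(h, dtype=complex)
    # Solve the Hankel system for the monic characteristic polynomial
    # z**s + p[s-1]*z**(s-1) + ... + p[0], whose roots are the nodes.
    H = np.array([h[j:j + s] for j in range(s)])
    p = np.linalg.solve(H, -h[s:2 * s])
    z = np.roots(np.concatenate(([1.0], p[::-1])))
    # Recover the coefficients by a Vandermonde least-squares fit.
    V = np.vander(z, 2 * s, increasing=True).T      # V[j, k] = z_k**j
    c, *_ = np.linalg.lstsq(V, h, rcond=None)
    return z, c

# Usage with made-up 3-term data: 2s = 6 samples determine the sum.
z_true = np.array([0.9, -0.5, 0.3 + 0.4j])
c_true = np.array([2.0, 1.0, -0.5])
h = [np.sum(c_true * z_true**j) for j in range(6)]
z_rec, c_rec = prony(h, 3)
```

The exact-data Hankel solve is the textbook formulation; robust floating-point variants, as in the works cited above, replace it with SVD-based or least-squares steps.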