2016
DOI: 10.1615/int.j.uncertaintyquantification.2016015915
ROBUST UNCERTAINTY QUANTIFICATION USING PRECONDITIONED LEAST-SQUARES POLYNOMIAL APPROXIMATIONS WITH l1-REGULARIZATION

Abstract: We propose a non-iterative robust numerical method for the non-intrusive uncertainty quantification of multivariate stochastic problems with reasonably compressible polynomial representations. The approximation is robust to data outliers or noisy evaluations which do not fall under the regularity assumption of a stochastic truncation error but pertain to a more complete error model, capable of handling interpretations of physical/computational model (or measurement) errors. The method relies on the cross-vali…
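To make the setup in the abstract concrete, here is a minimal sketch of an l1-regularized (LASSO) least-squares polynomial fit with a simple column preconditioning step. It is an illustration under our own assumptions (a synthetic 1D model, injected outliers, unit-norm column scaling as a stand-in preconditioner), not a reproduction of the paper's actual preconditioner, error model, or cross-validation procedure.

```python
import numpy as np
from numpy.polynomial import legendre
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Hypothetical 1D model; evaluations are contaminated with sparse outliers.
def model(x):
    return np.exp(-x) * np.sin(3 * x)

n_samples, degree = 60, 15
x = rng.uniform(-1.0, 1.0, n_samples)
y = model(x)
y[rng.choice(n_samples, 3, replace=False)] += 2.0  # inject a few outliers

# Design matrix of Legendre polynomials evaluated at the samples.
A = legendre.legvander(x, degree)

# Simple column preconditioning: rescale columns to unit Euclidean norm
# (a stand-in for the paper's preconditioner, which is not reproduced here).
norms = np.linalg.norm(A, axis=0)
A_pre = A / norms

# l1-regularized least squares (LASSO) promotes a sparse coefficient set.
lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50_000)
lasso.fit(A_pre, y)
coeffs = lasso.coef_ / norms  # undo the preconditioning scaling

x_test = np.linspace(-1.0, 1.0, 200)
print("max abs error:", np.max(np.abs(legendre.legval(x_test, coeffs) - model(x_test))))
```

The column rescaling improves the conditioning of the least-squares system before the l1 solve; the regularization strength alpha is an arbitrary placeholder that would in practice be selected by cross-validation, as the abstract indicates.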

Cited by 6 publications (7 citation statements)
References 38 publications
“…There exist different ways of deriving sample weights from the discrepancy measure, but the main idea is to make the weight inversely proportional to the discrepancy, with some numerical bounding capability, i.e. ρ_j ∼ max(ε, ∆_j)⁻¹ [23]. If all ρ_j are large, it means that the surrogate is quite accurate in the sampled approximate posterior region, implying in turn that π…”
Section: Adaptive Weighted Regression Construction
Citation type: mentioning
confidence: 99%
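The weighting rule quoted above can be sketched in a few lines. The floor ε and the per-sample discrepancy values below are hypothetical placeholders, and the final normalization is our own addition for readability.

```python
import numpy as np

eps = 1e-3                             # hypothetical floor preventing unbounded weights
delta = np.array([2e-1, 5e-4, 3e-2])   # hypothetical per-sample discrepancies

# Weight each sample inversely to its (floored) discrepancy:
# rho_j ~ max(eps, delta_j)^(-1), so accurate samples count more.
rho = 1.0 / np.maximum(eps, delta)
rho /= rho.sum()                       # optional: normalize weights to sum to one
print(rho)
```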
“…We have seen up until now the adaptive strategies and algorithms used when solving the optimization problems (28) or (33). However, as part of the motivation and novelty of this paper we wish to: (a) quantify the deterministic error at each sample of the parameter space and solve a local (sample-wise) deterministic optimization problem with a required complexity C_x, and (b) be able to choose which error, stochastic or deterministic, dominates a computation and thus to solve the corresponding problem.…”
Section: Generate Mesh
Citation type: mentioning
confidence: 99%
“…compressible airflow around an aircraft). Putting aside modeling error, sometimes leading to unpredictable outlying fluctuations [28], a key element of the computational pipeline is the mesh; it discretizes the geometry, acts as the support of the numerical method and needs to be tailored to the underlying non-linear equations governing the system physics. A poor mesh will fail to capture both geometrical and physical complexities and will thus yield inaccurate results [31].…”
Section: Introduction
Citation type: mentioning
confidence: 99%
“…The least absolute shrinkage and selection operator (LASSO) algorithm [45] is in this case an attractive modification of the ordinary least-squares formulation that constrains the sum of the absolute values of the regression coefficients. Weighted versions exist that make the approximation even more robust [46]. Another related model selection algorithm, least-angle regression (LAR), is also very efficient in our framework.…”
Section: Computation Of The Expansion Coefficients
Citation type: mentioning
confidence: 99%
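The weighted LASSO variants mentioned above can be illustrated by row rescaling: multiplying each row of the system by the square root of its weight produces a weighted squared-error loss while leaving the l1 penalty on the coefficients unchanged. The data, weights, and regularization strength below are invented for the sketch and are not taken from [46].

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Invented regression data with a sparse true coefficient vector.
A = rng.normal(size=(50, 20))
beta_true = np.zeros(20)
beta_true[[2, 7, 11]] = [1.0, -0.5, 2.0]
y = A @ beta_true + 0.01 * rng.normal(size=50)
w = rng.uniform(0.1, 1.0, size=50)      # hypothetical per-sample weights

# Weighted LASSO via row rescaling: row j scaled by sqrt(w_j) turns the
# ordinary squared loss into a w-weighted one; the l1 penalty is unchanged.
sw = np.sqrt(w)
lasso = Lasso(alpha=1e-2, fit_intercept=False)
lasso.fit(A * sw[:, None], y * sw)
print(np.nonzero(lasso.coef_)[0])       # indices of the recovered sparse support
```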