2018
DOI: 10.1016/j.amc.2017.11.044

Sparse polynomial chaos expansion based on D-MORPH regression

Cited by 45 publications (19 citation statements)
References 41 publications
“…The selection criteria for retaining the basis functions are based on the coefficient of determination R² and the least angle regression technique. Later, many other attempts have been made to develop sparse PCE models in the field of UQ; the common idea held in these methods is that the PCE coefficients are sparse (i.e., having only a few dominant coefficients). Given the training sample {X, Y}, where X = {x_1, …, x_N}^T is the input data, Y = {Y_1, …, Y_N}^T is the corresponding model response, and N is the sample size, the dominant PCE coefficients can be recovered by solving the following optimization problem:

$$\boldsymbol{\omega}_{\boldsymbol{\alpha}} = \arg\min_{\boldsymbol{\omega}_{\boldsymbol{\alpha}}} \|\boldsymbol{\omega}_{\boldsymbol{\alpha}}\|_1 \quad \text{subject to} \quad \|\boldsymbol{\Phi}\boldsymbol{\omega}_{\boldsymbol{\alpha}} - \mathbf{Y}\| \le \epsilon,$$

where $\|\boldsymbol{\omega}_{\boldsymbol{\alpha}}\|_1$ is the $l_1$ norm of the PCE coefficients, $\epsilon$ is a tolerance parameter necessitated by the truncation error, and $\Phi(i, j) = \psi_j(x_i)$ (i = 1, …, N; j = 1, …, P + 1) is the measurement matrix.…”
Section: Polynomial Chaos Approximation (mentioning, confidence: 99%)
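The l₁-minimization problem quoted above can be sketched numerically. Everything below is an illustrative assumption rather than the paper's method: a 1-D toy model, a probabilists' Hermite basis, and a plain iterative soft-thresholding (ISTA) solver applied to the common Lagrangian relaxation of the constrained problem.

```python
import numpy as np

# Hedged sketch: recover sparse PCE coefficients by l1 minimization.
# The 1-D toy model, sample size N, and degree are illustrative assumptions.

rng = np.random.default_rng(0)

def hermite_basis(x, degree):
    """Probabilists' Hermite polynomials He_0..He_degree at the points x."""
    H = np.zeros((x.size, degree + 1))
    H[:, 0] = 1.0
    if degree >= 1:
        H[:, 1] = x
    for n in range(1, degree):
        H[:, n + 1] = x * H[:, n] - n * H[:, n - 1]  # three-term recurrence
    return H

# Training sample {X, Y}: the "true" PCE is sparse (He_0, He_1, He_2 only).
N, degree = 40, 8
X = rng.standard_normal(N)
Y = 1.0 + 0.5 * X + 2.0 * (X**2 - 1.0)           # 1 + 0.5*He_1 + 2*He_2
Phi = hermite_basis(X, degree)                    # Phi(i, j) = psi_j(x_i)

# ISTA on the Lagrangian form min 0.5*||A c - Y||^2 + lam*||c||_1,
# with column-normalized A for a well-conditioned step size.
norms = np.linalg.norm(Phi, axis=0)
A = Phi / norms
lam = 0.01
L = np.linalg.norm(A, 2) ** 2                     # Lipschitz constant of the gradient
c = np.zeros(degree + 1)
for _ in range(20000):
    z = c - A.T @ (A @ c - Y) / L                 # gradient step
    c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

w = c / norms                                     # undo the column scaling
print(np.round(w, 2))                             # dominant entries at j = 0, 1, 2
```

Note that the hard constraint ‖Φω − Y‖ ≤ ε is replaced here by its Lagrangian relaxation; dedicated basis-pursuit-denoising solvers handle the constrained form directly.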
“…Given the training sample {X, Y}, where X = {x_1, …, x_N}^T is the input data, Y = {Y_1, …, Y_N}^T is the corresponding model response, and N is the sample size, the dominant PCE coefficients can be recovered by solving the following optimization problem:

$$\boldsymbol{\omega}_{\boldsymbol{\alpha}} = \arg\min_{\boldsymbol{\omega}_{\boldsymbol{\alpha}}} \|\boldsymbol{\omega}_{\boldsymbol{\alpha}}\|_1 \quad \text{subject to} \quad \|\boldsymbol{\Phi}\boldsymbol{\omega}_{\boldsymbol{\alpha}} - \mathbf{Y}\| \le \epsilon,$$

where $\|\boldsymbol{\omega}_{\boldsymbol{\alpha}}\|_1$ is the $l_1$ norm of the PCE coefficients, $\epsilon$ is a tolerance parameter necessitated by the truncation error, and $\Phi(i, j) = \psi_j(x_i)$ (i = 1, …, N; j = 1, …, P + 1) is the measurement matrix. To solve the above optimization problem, a large number of powerful algorithms have been proposed, such as adaptive methods and $l_1$-minimization methods. The adaptive methods aim at selecting the significant basis functions from the full PCE sequentially, using only a few samples, based on a well-defined selection criterion, such as the correlation criterion in References and the variance contribution criterion in Reference .…”
Section: Polynomial Chaos Approximation (mentioning, confidence: 99%)
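The adaptive, sequential selection idea mentioned above can be illustrated with orthogonal matching pursuit, a greedy scheme built on exactly this kind of correlation criterion. The Legendre basis, toy model, and sizes below are illustrative assumptions, not the specific cited algorithms.

```python
import numpy as np

# Hedged sketch of greedy basis selection via a correlation criterion
# (orthogonal matching pursuit). Basis, model, and sizes are assumptions.

rng = np.random.default_rng(1)

def omp(Phi, Y, n_terms):
    """Pick n_terms columns of Phi, one per step, by residual correlation."""
    residual = Y.copy()
    active = []
    coef = np.zeros(0)
    col_norms = np.linalg.norm(Phi, axis=0)
    for _ in range(n_terms):
        corr = np.abs(Phi.T @ residual) / col_norms   # correlation criterion
        corr[active] = 0.0                            # never reselect a column
        active.append(int(np.argmax(corr)))
        # refit on the active set by least squares, then update the residual
        coef, *_ = np.linalg.lstsq(Phi[:, active], Y, rcond=None)
        residual = Y - Phi[:, active] @ coef
    return active, coef

# Sparse "true" expansion in a Legendre basis: only terms j = 2 and j = 7.
N, P = 50, 12
x = rng.uniform(-1.0, 1.0, N)
Phi = np.polynomial.legendre.legvander(x, P)      # Phi(i, j) = P_j(x_i)
c_true = np.zeros(P + 1)
c_true[2], c_true[7] = 3.0, -2.0
Y = Phi @ c_true

active, coef = omp(Phi, Y, n_terms=2)
print(sorted(active))                             # the two dominant terms
```

Each step adds the single basis function most correlated with the current residual, so only a few model evaluations are needed to find the dominant terms of a sparse expansion.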
“…This algorithm is available in the UQLab software introduced by Marelli and Sudret. Under the sparsity assumption, many other techniques have been established in the context of compressed sensing. PCE is more suitable for capturing the global trend.…”
Section: Introduction (mentioning, confidence: 99%)
“…15 Under the sparsity assumption, many other techniques have been established in the context of compressed sensing.[16][17][18][19] PCE is more suitable for capturing the global trend. However, PCE is often not adequate for capturing local accuracy in the close neighborhood of the sample points.…”
Section: Introduction (mentioning, confidence: 99%)