2019
DOI: 10.1016/j.jcp.2018.12.029
Entropy-based closure for probabilistic learning on manifolds

Abstract: This paper presents mathematical results in support of the probabilistic learning on manifolds (PLoM) methodology recently introduced by the authors, which has been used with success for analyzing complex engineering systems. The PLoM considers a given initial dataset constituted of a small number of points in a Euclidean space, interpreted as independent realizations of a vector-valued random variable whose non-Gaussian probability measure is unknown but is, a priori, concentra…

Cited by 25 publications (23 citation statements)
References 52 publications (109 reference statements)
“…The construction introduces two hyperparameters: the dimension m ≤ N and the smoothing parameter ε_diff > 0. An algorithm for estimating their values is proposed in the work of Soize et al. Most of the time, m and ε_diff can be chosen as follows.…”
Section: Computational Statistical Methods For Generating Realizationsmentioning
confidence: 99%
“…If the function m̂ is a decreasing function of ε_diff in the broad sense (if not, see the work of Soize et al), then the optimal value ε_diff^opt of ε_diff can be chosen as the smallest value of ε_diff such that m̂(ε_diff^opt) < m̂(ε_diff) for all ε_diff ∈ ]0, ε_diff^opt[ and m̂(ε_diff^opt) = m̂(ε_diff) for all ε_diff ∈ ]ε_diff^opt, 1.5 ε_diff^opt[.…”
Section: Computational Statistical Methods For Generating Realizationsmentioning
confidence: 99%
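The plateau criterion quoted above can be approximated numerically by scanning a grid of candidate widths. The following sketch is illustrative only: the names select_eps_opt and m_hat, the grid, and the step-function example are hypothetical, and the grid-based check only approximates the continuous intervals ]0, ε_opt[ and ]ε_opt, 1.5 ε_opt[ in the criterion.

```python
import numpy as np

def select_eps_opt(m_hat, eps_grid):
    """Smallest eps on the grid whose m_hat value is strictly below m_hat at
    every smaller grid point and constant on the grid points in ]eps, 1.5*eps[."""
    vals = np.array([m_hat(e) for e in eps_grid])
    for i, eps in enumerate(eps_grid):
        below = bool(np.all(vals[:i] > vals[i]))          # strictly smaller than all earlier values
        window = (eps_grid > eps) & (eps_grid < 1.5 * eps)  # plateau check window
        if i > 0 and window.any() and below and np.all(vals[window] == vals[i]):
            return eps
    return None  # no plateau found on this grid

# Hypothetical decreasing step function m_hat with a plateau starting at eps = 1.0
eps_grid = np.array([0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 2.0])
m_hat = lambda e: 10 if e < 1.0 else 5
select_eps_opt(m_hat, eps_grid)  # returns 1.0
```

In practice the grid must be fine enough that the window ]eps, 1.5·eps[ contains at least one grid point for every candidate, otherwise the plateau condition is vacuously skipped.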
“…Clearly, these eigenvalues, and hence the optimal value of m, depend on ε, the width of the diffusion kernel. A maximum entropy argument [29] is used to simultaneously select the values of ε and m.…”
Section: Diffusion Mapsmentioning
confidence: 99%
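The dependence of the spectrum on the kernel width can be made concrete with the standard diffusion-maps construction (Gaussian kernel followed by row normalization, as in Coifman and Lafon). This is a generic sketch under that assumption, not the cited authors' implementation; diffusion_eigenvalues is a hypothetical helper name.

```python
import numpy as np

def diffusion_eigenvalues(X, eps):
    """Eigenvalue magnitudes (sorted, descending) of the diffusion-maps
    transition matrix built from dataset X with kernel width eps."""
    # Pairwise squared distances via broadcasting: d2[i, j] = ||x_i - x_j||^2
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (4.0 * eps))            # Gaussian diffusion kernel
    P = K / K.sum(axis=1, keepdims=True)     # row-stochastic transition matrix
    lam = np.linalg.eigvals(P)
    return np.sort(np.abs(lam))[::-1]        # leading eigenvalue is 1
```

Because P is row-stochastic and similar to a symmetric positive semidefinite matrix, its spectrum lies in [0, 1] with leading eigenvalue exactly 1; sweeping eps and watching where the spectrum develops a gap is one way to see why the optimal m depends on the kernel width.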