2021
DOI: 10.48550/arxiv.2106.04170
Preprint

Scalable conditional deep inverse Rosenblatt transports using tensor-trains and gradient-based dimension reduction

Abstract: We present a novel offline-online method to mitigate the computational burden of the characterization of conditional beliefs in statistical learning. In the offline phase, the proposed method learns the joint law of the belief random variables and the observational random variables in the tensor-train (TT) format. In the online phase, it utilizes the resulting order-preserving conditional transport map to issue real-time characterization of the conditional beliefs given new observed information. Compared with …
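As a rough sketch of what the conditional transport does (our paraphrase of the standard conditional inverse Rosenblatt construction, not a verbatim statement of the paper's algorithm): let $\hat{\pi}(y, \theta)$ be the TT approximation of the joint density of observations $y$ and parameters $\theta \in \mathbb{R}^d$ learned offline. The Rosenblatt transport built from $\hat{\pi}$ maps $(y, \theta)$ to uniform variables via nested conditional CDFs,

$$ R(y, \theta) = \big( F_Y(y),\; F_{\Theta_1 \mid Y}(\theta_1 \mid y),\; \dots,\; F_{\Theta_d \mid Y, \Theta_{<d}}(\theta_d \mid y, \theta_{<d}) \big), $$

and, for a fixed observation $y^\ast$, inverting the parameter block yields an order-preserving map $\theta = T_{y^\ast}(u)$ with $u \sim \mathcal{U}([0,1]^d)$ that pushes uniform samples to approximate samples from the conditional $\hat{\pi}(\theta \mid y^\ast)$ in the online phase.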

Cited by 2 publications (5 citation statements)
References 30 publications (44 reference statements)
“…The computational cost is invariant to variable ordering. This is more flexible than the squared-tensor-train methods of [14,15], where marginalizing variables in arbitrary order can significantly increase the computational complexity.…”
Section: Approximating the Target Density (mentioning)
confidence: 99%
“…Instead of directly operating with the high-dimensional discretized parameter θ, we apply the data-free dimension reduction method of [15,16] to compress the parameter dimensionality. We first construct a sensitivity matrix H ∈ R^{d_θ × d_θ} by integrating the Fisher information matrix over the prior distribution, and then compute the eigenpairs (ϕ_i, ν_i), in descending order of the eigenvalues ν_i, of the matrix pencil (H, C^{-1}).…”
Section: Elliptic PDE (mentioning)
confidence: 99%
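The pencil computation in the statement above can be sketched in a few lines. This is a minimal illustration assuming NumPy/SciPy; fisher_at and prior_samples are hypothetical stand-ins for the model-specific Fisher information and prior draws, not the authors' code.

    import numpy as np
    from scipy.linalg import eigh

    def gradient_based_reduction(fisher_at, prior_samples, C, rank):
        # Sketch of the data-free dimension reduction described above (assumed interface).
        # fisher_at(theta) -> Fisher information matrix at parameter theta, shape (d, d)
        # prior_samples    -> array of prior draws, shape (n, d)
        # C                -> prior covariance matrix, shape (d, d)
        # rank             -> number of retained modes r

        # Sensitivity matrix: Monte Carlo estimate of the prior-averaged Fisher information.
        H = np.mean([fisher_at(theta) for theta in prior_samples], axis=0)

        # Generalized eigenproblem for the pencil (H, C^{-1}):  H phi = nu C^{-1} phi.
        # scipy.linalg.eigh(a, b) solves a v = w b v with eigenvalues in ascending order,
        # so sort into descending order afterwards.
        nus, phis = eigh(H, np.linalg.inv(C))
        order = np.argsort(nus)[::-1]
        nus, phis = nus[order], phis[:, order]

        # The leading r eigenvectors span the data-informed parameter subspace.
        return nus[:rank], phis[:, :rank]

In practice one would usually avoid forming C^{-1} explicitly and solve the pencil through a factorization of C; the dense version above just keeps the sketch short.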
“…is typical for Markov chain Monte Carlo methods. With this in mind, functional representations in a polynomial basis that exploit the beneficial structure of the Knothe-Rosenblatt transform were developed, for instance, in [74,21,20]. For the formulation of the variational problem, the Kullback-Leibler divergence is used.…”
(mentioning)
confidence: 99%
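For context, the "beneficial structure" referred to here is the lower-triangular form of the Knothe-Rosenblatt map, stated in its standard textbook form:

$$ T(x) = \big( T_1(x_1),\; T_2(x_1, x_2),\; \dots,\; T_d(x_1, \dots, x_d) \big), $$

where each component $T_k$ is monotone in $x_k$ and is built from the conditional CDF $F_{X_k \mid X_1, \dots, X_{k-1}}$. This triangular, monotone structure is what makes the map amenable to functional (for instance, polynomial or tensor-train) representations and to exact conditioning on a leading block of variables.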