2023
DOI: 10.1016/j.cma.2022.115811

Multi-fidelity surrogate modeling using long short-term memory networks

Cited by 36 publications (13 citation statements)
References 44 publications
“…Indeed FF networks lack the capability to retain memory across time steps, and thus they are unable to preserve the temporal dependency within the data. We refer to [62] for further discussions on these aspects.…”
Section: (c) The need of LSTM network
confidence: 99%
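The point about memory can be made concrete with a small example. Below is a minimal sketch, assuming PyTorch, of a sequence surrogate built on nn.LSTM; the class name, feature count, and hidden size are illustrative, not taken from the paper. Unlike a feed-forward map applied independently at each time step, the LSTM carries a hidden/cell state along the time axis, which is what lets it preserve temporal dependencies in the data.

```python
# Minimal sketch (assuming PyTorch): an LSTM retains state across time
# steps, so the prediction at time t can depend on u_0, ..., u_t.
# Shapes and sizes are illustrative placeholders.
import torch
import torch.nn as nn

class SequenceSurrogate(nn.Module):
    def __init__(self, n_features: int, n_hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, n_features)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, time, n_features); the LSTM propagates its internal
        # state along the time axis, preserving temporal dependencies.
        h, _ = self.lstm(u)
        return self.head(h)  # one prediction per time step

# A feed-forward net applied per step would map each u_t in isolation
# and could not represent such dependencies.
model = SequenceSurrogate(n_features=3)
y = model(torch.randn(8, 100, 3))  # (batch=8, T=100, features=3)
```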
“…LSTM models have been shown to be effective in time series analysis, e.g. in speech recognition [58,59], natural language processing [60] and computational mechanics [61,62]. In particular, LSTM networks have proven to be successful in the detection of both long- and short-term temporal patterns and nonlinear correlations between datasets, with potential relevance in the construction of non-intrusive reduced order models [55,63–66].…”
Section: Introduction
confidence: 99%
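As a concrete illustration of the non-intrusive reduced order modeling idea mentioned in this statement, here is a minimal sketch, assuming NumPy and PyTorch: snapshots are compressed with POD (via an SVD), and an LSTM learns to advance the reduced coefficients in time. The snapshot data, mode count, and class name are placeholders, not the setup of any cited work.

```python
# Minimal non-intrusive ROM sketch (assuming NumPy/PyTorch): POD
# compression followed by an LSTM on the reduced coefficients.
import numpy as np
import torch
import torch.nn as nn

# Snapshot matrix: columns are full-order states at successive times.
S = np.random.randn(2_000, 200)              # (n_dof, n_steps), placeholder data
U, _, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :20]                                # POD basis, 20 modes
a = torch.tensor(V.T @ S, dtype=torch.float32).T.unsqueeze(0)  # (1, T, 20)

class ReducedDynamics(nn.Module):
    """Predicts the next reduced state from the history of past ones."""
    def __init__(self, n_modes: int = 20, n_hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_modes, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, n_modes)

    def forward(self, a_hist: torch.Tensor) -> torch.Tensor:
        h, _ = self.lstm(a_hist)
        return self.head(h[:, -1])           # next coefficient vector

rom = ReducedDynamics()
a_next = rom(a[:, :50])                      # advance from the first 50 steps
# Full-order reconstruction: V @ a_next (no solver call needed online).
```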
“…Yan and Zhou [29] built an adaptive surrogate model based on the MF approach in conjunction with NN to solve the Bayesian inverse problem. The NN was used to bridge the low- and high-fidelity surrogate models in the framework of Markov chain Monte Carlo. Conti et al. [30] established MF long short-term memory networks to solve time-dependent problems. Motamed [31] constructed two separate neural networks based on bi-fidelity training data.…”
Section: Introduction
confidence: 99%
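The multi-fidelity strategies summarized above share a common pattern: a cheap low-fidelity solution is corrected toward the expensive high-fidelity one by a learned map. The sketch below, assuming PyTorch, illustrates that pattern with an LSTM so the correction can exploit temporal context; it is a generic illustration, not the specific architecture of Conti et al. [30] or the bi-fidelity networks of Motamed [31].

```python
# Minimal multi-fidelity sketch (assuming PyTorch): learn the map from
# a cheap low-fidelity trajectory to the expensive high-fidelity one.
import torch
import torch.nn as nn

class MultiFidelityLSTM(nn.Module):
    def __init__(self, n_out: int, n_hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_out, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, n_out)

    def forward(self, u_lf: torch.Tensor) -> torch.Tensor:
        # u_lf: low-fidelity trajectory, shape (batch, time, n_out).
        # The residual connection means the network only has to learn
        # the (typically smoother) low-to-high-fidelity correction.
        h, _ = self.lstm(u_lf)
        return u_lf + self.head(h)

# Training pairs (u_lf, u_hf) would come from running both solvers on
# the same few parameter samples; u_lf alone is cheap to generate.
model = MultiFidelityLSTM(n_out=3)
u_hf_pred = model(torch.randn(4, 200, 3))    # (batch=4, T=200, outputs=3)
```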