“…The former strategy includes the development of accelerated Markov chain samplers, Hamiltonian Monte Carlo sampling, iterative local-updating ensemble smoothers, ensemble Kalman filters, and learning on statistical manifolds (Barajas-Solano et al., 2019; Boso & Tartakovsky, 2020b; Kang et al., 2021; Zhou & Tartakovsky, 2021). The latter strategy aims to replace an expensive forward model with a cheap surrogate/emulator/reduced-order model (Ciriello et al., 2019; Lu & Tartakovsky, 2020a). Among these techniques, various flavors of deep neural networks (DNNs) have attracted attention, in part because they remain robust for large numbers of inputs and outputs (Zhou & Tartakovsky, 2021; Mo et al., 2020; Kang et al., 2021).…”
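The surrogate strategy described above can be illustrated with a minimal sketch: an expensive forward model is evaluated offline on a design of parameter samples, a small neural network is fit to those input-output pairs, and the cheap network then stands in for the simulator during the many evaluations required by a Bayesian sampler. The toy forward model, sample sizes, and network settings below are illustrative assumptions and are not taken from the cited works.

```python
# Hedged sketch of a DNN surrogate replacing an expensive forward model.
# All problem details here (forward_model, design ranges, network size) are
# hypothetical stand-ins, not the setups used in the cited references.
import numpy as np
from sklearn.neural_network import MLPRegressor

def forward_model(theta):
    """Stand-in for an expensive simulator mapping parameters to observables."""
    return np.column_stack([
        np.sin(theta[:, 0]) * theta[:, 1],
        np.exp(-theta[:, 0] ** 2) + theta[:, 1],
    ])

rng = np.random.default_rng(0)

# Offline stage: run the expensive model on a modest parameter design.
theta_train = rng.uniform(-2.0, 2.0, size=(500, 2))
y_train = forward_model(theta_train)

# Fit a small fully connected network as the surrogate/emulator.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(theta_train, y_train)

# Online stage: the surrogate replaces the simulator for the repeated
# evaluations needed by, e.g., MCMC or an ensemble-based method.
theta_new = rng.uniform(-2.0, 2.0, size=(5, 2))
print(surrogate.predict(theta_new))  # cheap approximation of forward_model(theta_new)
```

The offline/online split is the essential design choice: the simulator cost is paid once on the training design, and every subsequent likelihood evaluation in the inference loop touches only the inexpensive emulator.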