This work develops model-aware autoencoder networks as a new method for solving scientific forward and inverse problems. Autoencoders are unsupervised neural networks that learn new representations of data through appropriately selected architecture and regularization. The resulting mappings to and from the latent representation can be used to encode and decode the data. In our work, we set the data space to be the space of the parameter of interest we wish to invert for. Further, to encode the underlying physical model into the autoencoder, we constrain the latent space to be the space of observations of the physically governed phenomenon. In doing so, we leverage the well-known capability of deep neural networks as universal function approximators to simultaneously obtain both the parameter-to-observation and the observation-to-parameter maps. The results suggest that this simultaneous learning interacts synergistically to improve the inversion capability of the autoencoder.
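To make the architecture concrete, here is a minimal sketch (not the authors' implementation) of such a model-aware autoencoder in PyTorch, assuming training pairs (m, d) of parameters and observations generated by the physical forward model; the class name, loss name, and network sizes are hypothetical.

```python
import torch
import torch.nn as nn

class ModelAwareAutoencoder(nn.Module):  # hypothetical name
    def __init__(self, dim_param, dim_obs, hidden=64):
        super().__init__()
        # Encoder approximates the parameter-to-observation (forward) map.
        self.encoder = nn.Sequential(
            nn.Linear(dim_param, hidden), nn.Tanh(), nn.Linear(hidden, dim_obs)
        )
        # Decoder approximates the observation-to-parameter (inverse) map.
        self.decoder = nn.Sequential(
            nn.Linear(dim_obs, hidden), nn.Tanh(), nn.Linear(hidden, dim_param)
        )

    def forward(self, m):
        d_pred = self.encoder(m)            # latent code = predicted observations
        return d_pred, self.decoder(d_pred)

def model_aware_loss(model, m, d, weight=1.0):
    # Reconstruction term trains the inverse map; the latent term ties the
    # latent space to the observation space of the physical model.
    d_pred, m_rec = model(m)
    return ((m_rec - m) ** 2).mean() + weight * ((d_pred - d) ** 2).mean()
```

Minimizing this loss over (m, d) pairs trains both surrogate maps at once, which is the simultaneous learning the abstract refers to.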
This paper presents a regularization framework that aims to improve the fidelity of Tikhonov inverse solutions. At the heart of the framework is the data-informed regularization idea: only data-uninformed parameters need to be regularized, while the data-informed parameters, on which the data and the forward model are integrated, should remain untouched. We propose to employ the active subspace method to determine the data-informativeness of a parameter. The resulting framework is thus called the data-informed (DI) active subspace (DIAS) regularization. Four proposed DIAS variants are rigorously analyzed and shown to be robust with respect to the regularization parameter and capable of avoiding polluting solution features informed by the data. They are thus well suited for problems with small or reasonably small noise corruption in the data. Furthermore, the DIAS approaches can effectively reuse any Tikhonov regularization codes/libraries. Though they are readily applicable to nonlinear inverse problems, we focus on linear problems in this paper in order to gain insights into the framework. Various numerical results for linear inverse problems are presented to verify the theoretical findings and to demonstrate the advantages of the DIAS framework over the Tikhonov, truncated SVD (TSVD), and TSVD-based DI approaches.
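As a concrete illustration for the linear case, the sketch below applies the Tikhonov penalty only to the subspace orthogonal to the r dominant right singular vectors of the forward operator A, using those singular vectors as a simple stand-in for the active (data-informed) subspace; the function name and this particular variant are illustrative, not the paper's exact formulation.

```python
import numpy as np

def dias_tikhonov(A, d, alpha, r):
    """Sketch of a data-informed Tikhonov solve for min ||A u - d||^2:
    penalize only the data-uninformed directions, taken here as the
    complement of the r dominant right singular vectors of A."""
    _, _, Vt = np.linalg.svd(A)
    Vr = Vt[:r].T                        # proxy for the active subspace
    P = np.eye(A.shape[1]) - Vr @ Vr.T   # projector onto uninformed directions
    # Regularized normal equations: (A^T A + alpha * P) u = A^T d.
    return np.linalg.solve(A.T @ A + alpha * P, A.T @ d)
```

Because the penalty vanishes on the data-informed subspace, increasing alpha damps only the poorly determined components, which is why such schemes can remain robust as the regularization parameter grows.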
This work unifies the analysis of various randomized methods for solving linear and nonlinear inverse problems by framing the problem in a stochastic optimization setting. By doing so, we show that many randomized methods are variants of a sample average approximation. More importantly, we are able to prove a single theoretical result that guarantees the asymptotic convergence of a variety of randomized methods. Additionally, viewing randomized methods as sample average approximations enables us to prove, for the first time, a single non-asymptotic error result that holds for all randomized methods under consideration. Another important consequence of our unified framework is that it allows us to discover new randomized methods. We present various numerical results for linear, nonlinear, algebraic, and PDE-constrained inverse problems that verify the theoretical convergence results, and we provide a discussion of the apparently different convergence rates and behavior of the various randomized methods.
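For intuition, one canonical instance of this viewpoint is randomized least squares via Gaussian sketching: with E[s s^T] = I, the misfit ||A u - d||^2 equals E[(s^T (A u - d))^2], and replacing the expectation by an average over N draws yields a sample average approximation. The sketch below (with hypothetical names) is one such variant, not a specific method from the paper.

```python
import numpy as np

def saa_least_squares(A, d, n_samples, rng):
    """Sample average approximation of min_u ||A u - d||^2.
    Averaging (s_i^T (A u - d))^2 over N Gaussian samples s_i gives the
    sketched problem min_u ||S (A u - d)||^2 with S = [s_1, ..., s_N]^T / sqrt(N)."""
    m = A.shape[0]
    S = rng.standard_normal((n_samples, m)) / np.sqrt(n_samples)
    return np.linalg.lstsq(S @ A, S @ d, rcond=None)[0]

# Usage (illustrative):
# rng = np.random.default_rng(0)
# u_hat = saa_least_squares(A, d, n_samples=200, rng=rng)
```

As n_samples grows, the sketched objective converges to the true misfit, which is the mechanism behind the asymptotic convergence guarantees described above.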