We consider the inverse sensitivity analysis problem of quantifying the uncertainty in the inputs to a deterministic map given specified uncertainty in a linear functional of the map's output. This is a version of the model calibration or parameter estimation problem for a deterministic map. We assume that the uncertainty in the quantity of interest is represented by a random variable with a given distribution, and we use the law of total probability to express the inverse problem for the corresponding probability measure on the input space. Assuming that the map from the input space to the quantity of interest is smooth, we solve the generally ill-posed inverse problem by using the implicit function theorem to derive a method for approximating the set-valued inverse, which provides an approximate quotient-space representation of the input space. We then derive an efficient computational approach for a measure-theoretic approximation of the probability measure on the input space imparted by the approximate set-valued inverse that solves the inverse problem.
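A minimal sampling sketch of this measure-theoretic construction, under invented choices of map and output distribution (this is an illustration of the general idea, not the paper's algorithm or example): parameter samples implicitly partition the input space into cells, and the approximate set-valued inverse of an output bin is the set of samples whose image lands in that bin. The law of total probability then spreads each output bin's specified probability over its inverse image.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical smooth map Q from a 2-D parameter space to a scalar QoI
def Q(lam):
    return lam[:, 0] ** 2 + np.sin(lam[:, 1])

# Uniform samples in the parameter domain [0, 1]^2; each sample stands
# for a cell of an implicit Voronoi partition of the input space
lam = rng.uniform(0.0, 1.0, size=(100_000, 2))
q = Q(lam)

# Specified uncertainty in the QoI: probabilities on a partition of the
# output range (here simply uniform over 10 bins, as an assumed example)
edges = np.linspace(q.min(), q.max(), 11)
p_out = np.full(10, 0.1)

# Law of total probability: spread each output bin's probability evenly
# over the parameter samples whose image lands in that bin (the
# approximate set-valued inverse of the bin)
bin_idx = np.clip(np.digitize(q, edges) - 1, 0, 9)
counts = np.bincount(bin_idx, minlength=10)
p_lam = p_out[bin_idx] / counts[bin_idx]   # probability of each input cell

print(p_lam.sum())   # total probability over the input partition
```

The resulting `p_lam` is a discrete approximation of the probability measure on the input space: summing it over any collection of cells approximates the probability of the corresponding input event.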
We consider inverse problems for a deterministic model in which the dimension of the output quantities of interest computed from the model is smaller than the dimension of the input quantities to the model. In this case, the inverse problem admits set-valued solutions (equivalence classes of solutions). We devise a method for approximating a representation of the set-valued solutions in the parameter domain. We then consider a stochastic version of the inverse problem in which a probability distribution on the output quantities is specified. We construct a measure-theoretic formulation of the stochastic inverse problem, then establish the existence and structure of the solution using measure theory and the Disintegration Theorem. We also develop and analyze an approximate solution method for the stochastic inverse problem based on measure-theoretic techniques. We demonstrate the numerical implementation of the theory on a high-dimensional storm surge application in which simulated noisy surge data from Hurricane Katrina is used to determine the spatially variable bathymetry fields of highest probability.
We develop computable a posteriori error estimates for the pointwise evaluation of linear functionals of a solution to a parameterized linear system of equations. These error estimates are based on a variational analysis applied to polynomial spectral methods for forward and adjoint problems. We also use this error estimate to define an improved linear functional, and we prove that this improved functional converges at a much faster rate than the original linear functional given a pointwise convergence assumption on the forward and adjoint solutions. The advantage of this method is that we are able to use low-order spectral representations for the forward and adjoint systems to cheaply produce linear functionals with the accuracy of a higher-order spectral representation. The method presented in this paper also applies to the case where only the convergence of the spectral approximation to the adjoint solution is guaranteed. We present numerical examples showing that the error in this improved functional is often orders of magnitude smaller. We also demonstrate that in higher dimensions, the computational cost required to achieve a given accuracy is much lower using the improved linear functional.

Inverse problems using Bayesian methods require accurate and efficient estimates of distributions or probabilities. For such problems, the moments of the spectral representation are useful only if the output distribution happens to have a particularly simple form, such as Gaussian. In [31,30], the computational efficiency of the inference problem was dramatically improved by sampling the spectral representation rather than the full model. While this approach is very appealing in terms of computational cost, the reliability of the predictions relies on the pointwise accuracy of the spectral representation.
This accuracy may be lacking for the low-order spectral methods which are commonly used for high-dimensional parameterized systems. Meanwhile, computational modeling is becoming increasingly reliant on a posteriori error estimates to provide a measure of reliability on the numerical predictions. This methodology has been developed for a variety of methods and is widely accepted in the analysis of discretization error for partial differential equations [4,15,21]. The adjoint-based (dual-weighted residual) method is motivated by the observation that often the goal of a simulation is to compute a small number of linear functionals of the solution, such as the average value in a region or the drag on an object, rather than controlling the error in a global norm. This method has been successfully extended to estimate numerical errors due to operator splittings [16], operator decomposition for multiscale/multiphysics applications [9,19,20], adaptive sampling algorithms [17,18], and inverse sensitivity analysis [5,8]. It was also used in [32] to estimate the error in moments of linear functionals for the stochastic Galerkin approximation of a partial differential equation. In [7], the present authors used adjoint-based analysis to ...
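The adjoint-weighted residual correction behind such an improved functional can be sketched on a small linear system. Everything below (the matrix, the data, the functional, and the stationary iteration standing in for a low-order forward/adjoint approximation) is an invented example of the general technique, not the paper's method or test case: if A u = f, q(u) = gᵀu, and Aᵀφ = g, then q(u) − q(u_h) = φᵀ(f − A u_h), so adding an approximate adjoint weighting of the residual to q(u_h) yields a corrected functional whose error involves the product of the forward and adjoint approximation errors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.01 * rng.standard_normal((n, n))  # well-conditioned test matrix
f = rng.standard_normal(n)
g = rng.standard_normal(n)                          # functional: q(u) = g @ u

u = np.linalg.solve(A, f)                           # reference forward solution
q_true = g @ u

def approx_solve(M, b, iters=5):
    """Cheap stand-in for a low-order approximation: a few Richardson
    iterations x <- x + (b - M x), convergent here since A is near I."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + (b - M @ x)
    return x

u_h = approx_solve(A, f)          # approximate forward solution
phi_h = approx_solve(A.T, g)      # approximate adjoint solution

q_h = g @ u_h                                    # plain functional value
q_improved = q_h + phi_h @ (f - A @ u_h)         # adjoint-weighted residual correction

print(abs(q_true - q_h), abs(q_true - q_improved))
```

With the exact adjoint φ the correction would be exact; with the approximate φ_h the remaining error is (φ − φ_h)ᵀ(f − A u_h), a product of two small terms, which is why the improved functional converges at a faster rate than the original.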