Inverse problems deal with the quest for unknown causes of observed consequences, based on predictive models, known as forward models, that map the former to the latter in the causal order. Forward models are usually well-posed, as causes determine consequences in a unique and stable way. Inverse problems, on the other hand, are usually ill-posed: the data may be insufficient to identify the cause unambiguously, an exact solution may not exist, and, as in a mystery story, the reconstruction of the cause without extra information tends to be highly sensitive to measurement noise and modeling errors. The Bayesian methodology provides a versatile and natural way of incorporating extra information to supplement the noisy data, by modeling the unknown as a random variable and thereby highlighting the uncertainty about its value. Presenting the solution in the form of a posterior distribution opens a wide range of possibilities for computing useful estimates. Inverse problems are traditionally approached from the point of view of regularization, a process whereby the ill-posed problem is replaced by a nearby well-posed one. While many regularization techniques can be reinterpreted in the Bayesian framework through prior design, the Bayesian formalism provides new techniques to enrich the paradigm of traditional inverse problems. In particular, inaccuracies and inadequacies of the forward model are naturally handled in the statistical framework. Similarly, qualitative information about the solution may be recast as priors with unknown parameters that can be handled successfully in the hierarchical Bayesian context.
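To fix ideas, the standard formulation behind this discussion can be summarized as follows; this is a minimal sketch under common assumptions (additive noise and, for the regularization analogy, Gaussian densities), with $F$, $x$, $y$, $e$, and $\theta$ as illustrative symbols rather than notation taken from the article. The forward model and the posterior read

\[
y = F(x) + e, \qquad \pi(x \mid y) \propto \pi(y \mid x)\,\pi_{\text{prior}}(x).
\]

With Gaussian noise $e \sim \mathcal{N}(0, \sigma^2 I)$ and a Gaussian prior $x \sim \mathcal{N}(0, \gamma^2 I)$, the maximum a posteriori (MAP) estimate

\[
x_{\text{MAP}} = \arg\min_x \left\{ \frac{1}{2\sigma^2}\,\lVert y - F(x) \rVert^2 + \frac{1}{2\gamma^2}\,\lVert x \rVert^2 \right\}
\]

coincides with classical Tikhonov regularization with parameter $\alpha = \sigma^2/\gamma^2$, while promoting the prior parameter to an unknown $\theta$ with its own hyperprior yields the hierarchical posterior $\pi(x, \theta \mid y) \propto \pi(y \mid x)\,\pi(x \mid \theta)\,\pi_{\text{hyper}}(\theta)$.

The noise sensitivity of ill-posed problems, and its cure by prior information, can also be illustrated numerically. The following Python sketch is a hypothetical one-dimensional deblurring example (all names and parameter values are invented for illustration, not taken from the article); it contrasts naive inversion of an ill-conditioned forward matrix with the Gaussian-prior MAP estimate:

```python
import numpy as np

# Hypothetical forward model: discrete convolution with a narrow
# Gaussian kernel -- a severely ill-conditioned smoothing operator.
n = 100
t = np.linspace(0.0, 1.0, n)
F = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.03**2))
F /= F.sum(axis=1, keepdims=True)  # row-normalize the kernel

# Synthetic "true" cause and the noisy observed consequence.
x_true = np.sin(2 * np.pi * t) * ((t > 0.2) & (t < 0.8))
sigma = 1e-3  # assumed noise level
rng = np.random.default_rng(0)
y = F @ x_true + sigma * rng.standard_normal(n)

# Naive inversion: ill-posedness amplifies the noise catastrophically.
x_naive = np.linalg.solve(F, y)

# MAP estimate under the Gaussian prior x ~ N(0, gamma^2 I), which is
# equivalent to Tikhonov regularization with alpha = sigma^2 / gamma^2.
gamma = 0.1  # assumed prior standard deviation
alpha = (sigma / gamma) ** 2
x_map = np.linalg.solve(F.T @ F + alpha * np.eye(n), F.T @ y)

print("naive reconstruction error:", np.linalg.norm(x_naive - x_true))
print("MAP reconstruction error:  ", np.linalg.norm(x_map - x_true))
```

Even at this tiny noise level, the naive reconstruction is dominated by amplified noise, whereas the MAP estimate remains stable; the point of the sketch is only the qualitative contrast, not the particular parameter choices.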
This article is categorized under:
Statistical and Graphical Methods of Data Analysis > Bayesian Methods and Theory
Algorithms and Computational Methods > Numerical Methods
Applications of Computational Statistics > Computational Mathematics