2014
DOI: 10.1137/140966575

A Measure-Theoretic Variational Bayesian Algorithm for Large Dimensional Problems

Abstract: In this paper we provide an algorithm for solving the variational Bayesian problem as a functional optimization problem. The main contribution of this paper is to transpose a classical iterative optimization algorithm into the metric space of probability densities involved in the Bayesian methodology. The main advantage of this methodology is that it makes it possible to address large dimensional inverse problems with unsupervised algorithms. The interest of our algorithm is enhanced by its application to la…

Cited by 20 publications (40 citation statements)
References 36 publications
“…As stated in [11], there exists a problem equivalent to (5) in a separable probability measure space $\mathcal{A} = \prod_{i=1}^{P} \mathcal{A}_i$, the Cartesian product of the $\mathcal{A}_i$, which is defined as follows:…”
Section: B. Statement of the Problem
confidence: 99%
“…In [11], the gradient descent method in Hilbert spaces was transposed into the space of pdfs, yielding an exponentiated gradient based variational Bayesian approximation (EGrad-VBA) method, with proven convergence, for solving the involved functional optimization problem. With the aim of developing more efficient methods, we transpose here, in the same context, the subspace optimization method, which has been shown to outperform standard optimization methods, such as gradient or conjugate gradient methods, in terms of rate of convergence in finite dimensional Hilbert spaces [17].…”
Section: B. Statement of the Problem
confidence: 99%
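The citing work describes transposing gradient descent into the space of pdfs via an exponentiated gradient update. The sketch below illustrates that general idea on a discretized density, minimizing a KL divergence toward a fixed target; it is not the paper's EGrad-VBA algorithm, and the target, step size, and grid are illustrative assumptions. The key property shown is that the multiplicative update `q * exp(-alpha * grad)` keeps the iterate positive, so a renormalization is all that is needed to stay in the space of densities.

```python
import numpy as np

def kl_gradient(q, p):
    """Pointwise functional gradient of KL(q || p) with respect to q:
    log(q / p) + 1."""
    return np.log(q / p) + 1.0

def egrad_step(q, p, alpha, dx):
    """One exponentiated-gradient step in the space of densities.
    The multiplicative update preserves positivity; renormalizing
    restores integral-to-one, so the iterate remains a pdf."""
    q_new = q * np.exp(-alpha * kl_gradient(q, p))
    return q_new / (q_new.sum() * dx)

# Illustrative setup: a grid and a standard Gaussian target density p.
x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
p = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

# Start from a flat density over the interval and iterate.
q = np.full_like(x, 1.0 / 10.0)
for _ in range(200):
    q = egrad_step(q, p, alpha=0.5, dx=dx)

print(q.sum() * dx)        # remains normalized throughout
print(np.abs(q - p).max()) # q converges toward the target p
```

For this KL objective the update has a closed form, `q_{k+1} ∝ q_k^{1-α} p^α`, which makes the geometric convergence to the fixed point `q = p` easy to verify; in the variational Bayesian setting the gradient of the free-energy functional would replace `kl_gradient`.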