2015
DOI: 10.1007/s10044-015-0496-9

Approximate variational inference based on a finite sample of Gaussian latent variables

Abstract: Variational methods are employed in situations where exact Bayesian inference becomes intractable due to the difficulty in performing certain integrals. Typically, variational methods postulate a tractable posterior and formulate a lower bound on the desired integral to be approximated, e.g. the marginal likelihood. The lower bound is then optimised with respect to its free parameters, the so-called variational parameters. However, this is not always possible as for certain integrals it is very challenging (or tedi…
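For orientation, the lower bound the abstract refers to is usually written in the following standard form; the symbols q_phi, z and x are generic notation rather than quotes from the paper:

\log p(\mathbf{x}) \;=\; \log \int p(\mathbf{x}, \mathbf{z})\, \mathrm{d}\mathbf{z}
\;\geq\; \mathbb{E}_{q_\phi(\mathbf{z})}\big[\log p(\mathbf{x}, \mathbf{z}) - \log q_\phi(\mathbf{z})\big]
\;=:\; \mathcal{L}(\phi)

Maximising L(phi) over the variational parameters phi tightens the bound on the log marginal likelihood.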

Cited by 3 publications (4 citation statements)
References 15 publications
“…This approach makes use of the same model augmentations utilised in this work. The variational posterior of the triggering parameters could be inferred, e.g., via black-box variational inference (Ranganath et al. 2014; Gianniotis et al. 2015). Alternatively, one could restrict the calculations to finding the MAP estimate of the GP-ETAS model.…”
Section: Case Study: L'Aquila, Italy
confidence: 99%
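For context, black-box variational inference in the sense of Ranganath et al. (2014) estimates the gradient of the bound with the score-function estimator, which requires only pointwise evaluations of the log joint density; the generic notation below is not specific to the GP-ETAS setting mentioned in the citation:

\nabla_\lambda \mathcal{L} \;\approx\; \frac{1}{S} \sum_{s=1}^{S}
\nabla_\lambda \log q(\mathbf{z}_s \mid \lambda)\,
\big[\log p(\mathbf{x}, \mathbf{z}_s) - \log q(\mathbf{z}_s \mid \lambda)\big],
\qquad \mathbf{z}_s \sim q(\mathbf{z} \mid \lambda)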
“…The free parameters in (10) are µ, r and θ. Note the simplification in the entropy term due to the orthogonal…”
Section: Mean and Scaling of Covariance - MVI Eig
confidence: 99%
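One plausible reading of the truncated remark, under the assumption (not quoted from the paper) that the posterior is Gaussian with covariance C = U diag(r)^2 U^T for an orthogonal matrix U: since det(C) = \prod_i r_i^2, the entropy depends only on the scaling parameters r and not on the orthogonal factor,

\mathcal{H}\big[\mathcal{N}(\boldsymbol{\mu}, C)\big]
\;=\; \tfrac{1}{2}\log\det(2\pi e\, C)
\;=\; \tfrac{D}{2}\log(2\pi e) + \sum_{i=1}^{D}\log r_i .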
“…Following [6,10] we draw S samples z_s ∼ N(0, I_D) which we keep fixed throughout the optimisation of the objectives in (8), (10) and (11). This enables the use of scaled conjugate gradients (SCG) as the optimisation routine [16], in contrast to the typically employed stochastic gradient descent.…”
Section: Optimisation
confidence: 99%
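A minimal sketch of the fixed-sample idea described in this citation statement, assuming a fully factorised Gaussian posterior q(z) = N(mu, diag(sigma^2)) and a user-supplied log joint density; the function names, the toy target and the use of SciPy's "CG" routine (standing in for scaled conjugate gradients, which SciPy does not provide) are illustrative assumptions, not the paper's implementation:

import numpy as np
from scipy.optimize import minimize

def make_fixed_sample_elbo(log_joint, D, S, seed=0):
    """Monte Carlo ELBO built from a finite set of N(0, I_D) samples drawn once
    and kept fixed, so the objective is deterministic in (mu, log_sigma)."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((S, D))  # fixed base samples z_s ~ N(0, I_D)

    def negative_elbo(params):
        mu, log_sigma = params[:D], params[D:]
        sigma = np.exp(log_sigma)
        samples = mu + sigma * Z                            # reparameterised draws
        expected_log_joint = np.mean([log_joint(x) for x in samples])
        entropy = 0.5 * D * np.log(2 * np.pi * np.e) + np.sum(log_sigma)
        return -(expected_log_joint + entropy)              # minimise the negative bound

    return negative_elbo

# Example: approximate a correlated Gaussian target with a factorised posterior.
A = np.array([[2.0, 0.6], [0.6, 1.0]])
log_joint = lambda x: -0.5 * x @ np.linalg.solve(A, x)

obj = make_fixed_sample_elbo(log_joint, D=2, S=200)
x0 = np.zeros(4)                                            # [mu, log_sigma]
result = minimize(obj, x0, method="CG")                     # deterministic objective, deterministic optimiser
print(result.x[:2], np.exp(result.x[2:]))

Because the base samples are frozen, the objective has no sampling noise across iterations, which is what makes a deterministic second-order-style optimiser such as SCG applicable in place of stochastic gradient descent.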