2006 IEEE International Conference on Acoustics, Speech and Signal Processing Proceedings
DOI: 10.1109/icassp.2006.1660609

The Variational Bayes Approximation In Bayesian Filtering

Abstract: The Variational Bayes (VB) approach is used as a one-step approximation for Bayesian filtering. It requires the availability of moments of the free-form distributional optimizers. The latter may have intractable functional forms. In this contribution, we replace these by appropriate fixed-form distributions yielding the required moments. We address two scenarios of this Restricted VB (RVB) approximation. For the first scenario, an application in identification of HMMs is given. In the second, the fixed-form di…
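As background for the mean-field machinery the abstract builds on, the sketch below runs the standard conjugate textbook example (Gaussian data with unknown mean and precision, as in Bishop's PRML Section 10.1.3), where the moments of the free-form optimizers are tractable in closed form. It is context only, not the paper's RVB scheme; all variable names are illustrative.

```python
import numpy as np

# Mean-field VB for x_i ~ N(mu, 1/tau) with conjugate priors
# mu | tau ~ N(mu0, 1/(lam0*tau)) and tau ~ Gamma(a0, b0).
# The coupled updates below need only the moment E[tau] = aN/bN;
# RVB targets cases where such moments are NOT tractable.

rng = np.random.default_rng(0)
x = rng.normal(loc=1.5, scale=0.8, size=100)
N, xbar = len(x), x.mean()

mu0, lam0, a0, b0 = 0.0, 1.0, 1.0, 1.0   # broad priors
E_tau = a0 / b0                          # initial guess for E[tau]

for _ in range(20):                      # coordinate-ascent iterations
    # q(mu) = N(muN, 1/lamN)
    muN = (lam0 * mu0 + N * xbar) / (lam0 + N)
    lamN = (lam0 + N) * E_tau
    # q(tau) = Gamma(aN, bN), taking expectations under q(mu)
    aN = a0 + (N + 1) / 2.0
    bN = b0 + 0.5 * (np.sum((x - muN) ** 2) + N / lamN
                     + lam0 * ((muN - mu0) ** 2 + 1.0 / lamN))
    E_tau = aN / bN

print(muN, E_tau)  # approaches the sample mean and 1/variance
```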

Cited by 16 publications (36 citation statements)
References 6 publications
“…However, it is the first time, to our knowledge, that this family of algorithms is applied to the study of RL in animals. We show that the two algorithms (RL and SF) share deep common features: for instance, the HAFVF and other similar algorithms ([57]) can be used with a naive prior θ0 set to 0, in which case the update equations reduce somehow to a classical Q-learning algorithm ([1] and Appendix B). Another interesting bound between the two fields emerges when the measure of the environment volatility is built hierarchically: an interesting consequence of the forgetting algorithm we propose is that, when observations are not made, the agent erases progressively its memory of past events.…”
Section: Discussion
confidence: 99%
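The reduction to Q-learning mentioned in this quote can be sketched in a few lines (a toy single-value version with our own variable names, not the HAFVF itself): a forgetting Bayesian mean update started from the naive prior is a delta rule whose learning rate decays toward 1 − w.

```python
def q_learning_update(q, reward, alpha):
    """Classical tabular Q-learning update for a single state-action."""
    return q + alpha * (reward - q)

def forgetting_bayes_update(mean, count, reward, w):
    """Bayesian mean update with exponential forgetting of pseudo-counts.

    Starting from the naive prior mean = count = 0, this is the delta
    rule above with learning rate 1/count; under constant forgetting
    w in (0, 1), count converges to 1/(1 - w), so the effective
    learning rate converges to alpha = 1 - w.
    """
    count = w * count + 1.0
    mean = mean + (reward - mean) / count
    return mean, count

# Example: both updates track the same reward stream.
mean, count, q = 0.0, 0.0, 0.0
for reward in [1.0, 0.0, 1.0, 1.0]:
    mean, count = forgetting_bayes_update(mean, count, reward, w=0.9)
    q = q_learning_update(q, reward, alpha=0.1)
print(mean, q)
```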
“…In this model (see Fig 2), named the Hierarchical Adaptive Forgetting Variational Filter [1], specific prior configurations will bend the learning process to categorize surprising events either as contingency changes, or as accidents. In contrast with other models [57], w and b are represented with a rich probability distribution where both the expected values and variances have an impact on the model’s behaviour. For a given prior belief on z , a confident prior over w , centered on high values of this parameter, will lead to a lack of flexibility that would not be observed with a less confident prior, even if they have the same expectation.…”
Section: Methods
confidence: 99%
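The point about the prior's variance, not just its expectation, shaping flexibility can be made concrete with a toy conjugate computation (Beta pseudo-counts as a stand-in; the HAFVF's actual distributions over w are richer):

```python
# Two Beta priors over the forgetting factor w with the same mean 0.8
# but different confidence (pseudo-count mass). Illustrative only.
priors = {"weak": (8.0, 2.0), "confident": (80.0, 20.0)}
evidence = (0.0, 10.0)   # hypothetical observations favouring low w

for name, (a, b) in priors.items():
    post_a, post_b = a + evidence[0], b + evidence[1]
    print(name, "prior mean:", a / (a + b),
          "-> posterior mean:", post_a / (post_a + post_b))
# weak:      0.8 -> 0.40  (flexible: the agent lowers w quickly)
# confident: 0.8 -> 0.73  (rigid, despite an identical expectation)
```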
“…The EM algorithm can converge to a local, and not global, maximum [26, chap.5]. The VB algorithm is also sub-optimal because it is based on an approximation of the posterior distribution, and is dependent on the initialization [33].…”
Section: Performance of the Proposed Algorithms
confidence: 99%
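The initialization sensitivity noted above is easy to reproduce with a generic mixture-model example (unrelated to the cited paper's data; the sklearn API calls are real, the setup is ours):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# EM on the same two-component mixture from different random starts
# can settle at different values of its objective (local optima).
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-1.0, 1.0, 200),
                       rng.normal(1.5, 2.0, 200)]).reshape(-1, 1)

for seed in range(5):
    gmm = GaussianMixture(n_components=2, n_init=1,
                          init_params="random", random_state=seed)
    gmm.fit(data)
    # lower_bound_ is the converged per-sample log-likelihood bound;
    # values that differ across seeds indicate distinct local optima.
    print(seed, round(gmm.lower_bound_, 4))
```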
“…However, each of these approaches results in only a single posterior distribution conditioned on data up to some pre-specified time period T_n, and do not provide a mechanism for the approximation to be updated at a later time period T_{n+1} following the availability of additional observations. Smidl (2004) and Broderick et al (2013) each consider VB approximations for Bayesian updating, resulting in a progressive sequence of approximate posterior distributions that each condition on data up to any given time period T_n. Their approaches update to the time T_{n+1} by substitution of the time T_n posterior with MFVB approximations, which are feasibly obtained due to assuming the model and approximation each adhere to a suitably defined exponential family form.…”
Section: Introduction
confidence: 99%
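In the conjugate exponential-family setting this quote describes, substituting the time-T_n posterior as the next prior amounts to adding sufficient statistics to natural parameters. The sketch below shows the mechanism for a Gaussian mean with known noise variance (a generic illustration; variable names are ours, not from Smidl (2004) or Broderick et al (2013)):

```python
import numpy as np

def add_batch(eta1, eta2, batch, noise_var=1.0):
    """Absorb a batch's sufficient statistics into the natural
    parameters of a Gaussian posterior over the mean, where
    eta1 = mu/sigma^2 and eta2 = -1/(2*sigma^2)."""
    eta1 += batch.sum() / noise_var
    eta2 += -len(batch) / (2.0 * noise_var)
    return eta1, eta2

# Broad prior N(0, 10^2), expressed in natural parameters.
eta1, eta2 = 0.0, -1.0 / (2.0 * 100.0)

rng = np.random.default_rng(1)
for n in range(3):                      # batches arriving at T_1, T_2, T_3
    batch = rng.normal(2.0, 1.0, size=50)
    eta1, eta2 = add_batch(eta1, eta2, batch)
    sigma2 = -1.0 / (2.0 * eta2)        # back to moment parameters
    print(n, eta1 * sigma2, sigma2)     # posterior mean approaches 2.0
# Because the updates are additive in the natural parameters, batches
# can also be processed independently and summed afterwards; this is
# the "linear combination" of converged parameters noted in the next quote.
```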
“…In these special settings, MFVB is able to linearly combine the available optimally converged auxiliary parameters. While Smidl (2004) is concerned with state space models, Broderick et al (2013) considers application to a latent Dirichlet allocation problem, and shows it performs favourably compared to the approach of Hoffman et al (2010) in terms of log predictive score and computational time.…”
Section: Introduction
confidence: 99%