Changepoint models are widely used to model the heterogeneity of sequential data. We present a novel sequential Monte Carlo (SMC) online Expectation-Maximization (EM) algorithm for estimating the static parameters of such models. The SMC online EM algorithm has a cost per time step that is linear in the number of particles, which is particularly important when the data form a long sequence of observations, since it drastically reduces the computational requirements of implementation. We present an asymptotic analysis of the stability of the SMC estimates used in the online EM algorithm and demonstrate the performance of the scheme on both simulated data and real data originating from DNA analysis.
This paper proposes a new Bayesian tracking and parameter learning algorithm for non-linear and non-Gaussian multiple target tracking (MTT) models. A Markov chain Monte Carlo (MCMC) algorithm is designed to sample from the posterior distribution of the target states, birth and death times, and the association of observations to targets, which together constitute the solution to the tracking problem, as well as from the posterior of the model parameters. The numerical section presents performance comparisons with several competing techniques and demonstrates significant performance improvements in all cases.

An MCMC algorithm for the linear Gaussian MTT model, including parameter learning, was proposed in [1]. This algorithm, referred to as MCMC-DA hereinafter, samples in a much smaller space than our proposed algorithm has to, since the continuous-valued target states can be integrated out analytically; the problem then amounts to sampling from a probability mass function on a discrete space of data associations. However, this model reduction cannot be performed for a general non-linear and non-Gaussian MTT model, so the sampling space has to be enlarged to include the continuous states of the targets. Despite this, our new algorithm is efficient in that it approaches the performance of MCMC-DA for the linear Gaussian MTT model, as demonstrated in the numerical section.

An MCMC algorithm for tracking in a non-linear and non-Gaussian MTT model, but excluding parameter learning, was also recently proposed in [2]. Their method follows the MCMC-DA technique of [1] closely. Although the likelihood of the non-linear and non-Gaussian MTT model is not analytically available when the continuous-valued target states are integrated out, an unbiased estimate of it can be obtained using a particle filter, and the Metropolis-Hastings (MH) algorithm can be applied as long as the likelihood appearing in the Bayesian posterior can be estimated in an unbiased fashion [3].
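The unbiased likelihood estimate mentioned above is produced by a standard bootstrap particle filter. A minimal sketch for a toy scalar linear-Gaussian state-space model follows; the model, parameter values, and all names are illustrative assumptions, not the MTT model of the paper:

```python
import numpy as np

def bootstrap_pf_loglik(y, n_particles=500, sigma_x=1.0, sigma_y=1.0, rng=None):
    """Bootstrap particle filter for the toy model
        x_t = x_{t-1} + N(0, sigma_x^2),   y_t = x_t + N(0, sigma_y^2).
    Returns an estimate of log p(y_{1:T}); the corresponding likelihood
    estimate exp(.) is unbiased, which is the property PMMH exploits."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.normal(0.0, sigma_x, size=n_particles)   # particles from the prior
    loglik = 0.0
    for y_t in y:
        # weight each particle by the observation density N(y_t; x, sigma_y^2)
        logw = -0.5 * ((y_t - x) / sigma_y) ** 2 - 0.5 * np.log(2 * np.pi * sigma_y**2)
        m = logw.max()
        w = np.exp(logw - m)
        # the mean of the unnormalised weights estimates p(y_t | y_{1:t-1})
        loglik += m + np.log(w.mean())
        # multinomial resampling, then propagation through the dynamics
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = x[idx] + rng.normal(0.0, sigma_x, size=n_particles)
    return loglik
```

Averaging the unnormalised weights at each step yields an estimate of the predictive density, and the product over time (here accumulated in log space) is the unbiased likelihood estimate.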
This property is exploited in [2]: their MCMC method for tracking is essentially MCMC-DA in which the likelihood of the reduced model (i.e. with the continuous states integrated out) is replaced by the unbiased estimate provided by the particle filter. This is the particle marginal Metropolis-Hastings (PMMH) algorithm of [3] applied to the MTT problem, so the algorithm in [2] is referred to as PMMH-MTT hereinafter.

Although appealing due to its straightforward implementation, PMMH-MTT can result in an inefficient sampler, as we show when comparing it with our method. This is because the likelihood estimate has a high variance, which reduces the overall average acceptance probability of the algorithm. When static parameters are taken into account, which [2] did not do, the variance problem becomes far worse: the many product terms that form the MTT likelihood would have to be estimated unbiasedly and simultaneously to compute the acceptance probability of every proposed parameter change. An elegant solution to this problem is the particle Gibbs (PGibbs) algorithm of [3] for parameter learning in state-space models; we extend this technique to the MTT model.

The above-mentioned MTT algorithms based on MCMC, includi...
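The variance blow-up from multiplying many unbiased estimates can be illustrated with a toy calculation (the lognormal factors and the value v = 0.1 are assumptions chosen purely for illustration): if each of k likelihood factors is estimated unbiasedly and independently with relative variance v, the product is still unbiased, but its relative variance is (1 + v)^k − 1, which grows geometrically in k.

```python
import numpy as np

rng = np.random.default_rng(0)
v = 0.1                                   # relative variance of each factor estimate
sigma = np.sqrt(np.log(1 + v))            # lognormal with mean 1, relative variance v
for k in (1, 5, 20):
    # product of k independent unbiased estimates of 1
    est = rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma,
                        size=(200_000, k)).prod(axis=1)
    print(f"k={k:2d}  mean~{est.mean():.3f}  rel. var~{est.var():.2f}  "
          f"theory={(1 + v)**k - 1:.2f}")
```

The mean stays near 1 for every k (unbiasedness is preserved), but the variance explodes, which is exactly why the acceptance probability of a PMMH-style sampler over many simultaneously estimated factors collapses.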
The Metropolis-Hastings algorithm allows one to sample asymptotically from any probability distribution π admitting a density with respect to a reference measure, also denoted π here, that can be evaluated pointwise up to a normalising constant. There has recently been much work devoted to developing variants of the Metropolis-Hastings update that can handle scenarios where such an evaluation is impossible, yet are still guaranteed to sample from π asymptotically. Arguably the most popular approach to have emerged is the pseudo-marginal Metropolis-Hastings algorithm, which substitutes an unbiased estimate of an unnormalised version of π for π [Lin et al., 2000,
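A minimal sketch of the pseudo-marginal idea, assuming a user-supplied `loglik_hat(theta, rng)` that returns the log of a non-negative unbiased likelihood estimate (all names here are hypothetical, not from the paper):

```python
import numpy as np

def pseudo_marginal_mh(loglik_hat, log_prior, theta0, n_iters=2000, step=0.5, rng=None):
    """Pseudo-marginal Metropolis-Hastings: the intractable likelihood in the
    MH acceptance ratio is replaced by a non-negative unbiased estimate.
    Crucially, the estimate attached to the *current* state is recycled
    rather than re-drawn, which is what keeps the target distribution as
    the invariant marginal of the chain."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    ll = loglik_hat(theta, rng)               # estimate at the initial state
    samples = []
    for _ in range(n_iters):
        prop = theta + step * rng.standard_normal(theta.shape)  # random-walk proposal
        ll_prop = loglik_hat(prop, rng)       # fresh estimate at the proposal only
        log_alpha = ll_prop + log_prior(prop) - ll - log_prior(theta)
        if np.log(rng.uniform()) < log_alpha:
            theta, ll = prop, ll_prop         # on acceptance, keep the estimate too
        samples.append(theta.copy())
    return np.asarray(samples)
```

When `loglik_hat` happens to be exact (zero-variance estimate), this reduces to ordinary random-walk Metropolis-Hastings; as the variance of the estimate grows, the chain increasingly sticks at states whose estimates were luckily high, which is the inefficiency discussed above.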