2009
DOI: 10.3182/20090706-3-fr-2004.00129

An Overview of Sequential Monte Carlo Methods for Parameter Estimation in General State-Space Models

Abstract: Nonlinear non-Gaussian state-space models arise in numerous applications in control and signal processing. Sequential Monte Carlo (SMC) methods, also known as Particle Filters, provide very good numerical approximations to the associated optimal state estimation problems. However, in many scenarios, the state-space model of interest also depends on unknown static parameters that need to be estimated from the data. In this context, standard SMC methods fail and it is necessary to rely on more sophisticated algo…
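The abstract refers to SMC methods (particle filters) for state estimation in nonlinear non-Gaussian state-space models. As a concrete illustration only, here is a minimal bootstrap particle filter sketch for a hypothetical toy linear-Gaussian model; the model, function name, and all parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def bootstrap_filter(y, n_particles=500, rng=None):
    """Bootstrap particle filter for an assumed toy model:
         x_t = 0.5 * x_{t-1} + v_t,  v_t ~ N(0, 1)
         y_t = x_t + w_t,            w_t ~ N(0, 1)
    Returns the sequence of filtering-mean estimates E[x_t | y_{1:t}]."""
    rng = np.random.default_rng(rng)
    x = rng.normal(0.0, 1.0, n_particles)  # sample particles from the prior
    means = []
    for yt in y:
        # Propagate particles through the state transition
        x = 0.5 * x + rng.normal(0.0, 1.0, n_particles)
        # Weight by the observation likelihood (Gaussian, up to a constant)
        logw = -0.5 * (yt - x) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))  # weighted filtering-mean estimate
        # Multinomial resampling to avoid weight degeneracy
        x = x[rng.choice(n_particles, n_particles, p=w)]
    return np.array(means)
```

Resampling at every step, as above, is the simplest scheme; adaptive resampling based on the effective sample size is a common refinement.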


Cited by 203 publications (207 citation statements)
References 37 publications
“…A more challenging and realistic setting would be one where the full model is unknown, and one has to use noisy observations to infer a parametrization (Li et al 2009;Berry and Harlim 2014;Harlim 2016). This is the challenging topic of parameter estimation for hidden Markov and non-Markov models (Kantas et al 2009). We leave it to future work.…”
Section: Summary and Discussion
confidence: 99%
“…There are many methods to solve this problem (see [23] for a recent review) and here we concentrate on the Maximum Likelihood method. Given a finite observation history (y_n)_{0≤n≤T}, the model parameter that best describes the data can be taken to be the maximizer of the density function θ ∈ Θ ↦ p_θ(y_0, …, y_T).…”
Section: Example
confidence: 99%
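The passage above takes the parameter estimate to be the maximizer of the likelihood p_θ(y_0, …, y_T). A standard way to evaluate this quantity for a nonlinear model is with the particle filter itself, accumulating the log of the average importance weight at each step. The sketch below assumes a hypothetical toy model x_t = θ·x_{t-1} + v_t, y_t = x_t + w_t with standard normal noise; the function name and the grid-search step are illustrative, not from any cited work:

```python
import numpy as np

def pf_loglik(theta, y, n_particles=1000, seed=0):
    """SMC estimate of log p_theta(y_0:T) for the assumed toy model
         x_t = theta * x_{t-1} + v_t,  y_t = x_t + w_t,  v_t, w_t ~ N(0, 1)."""
    rng = np.random.default_rng(seed)  # fixed seed: common random numbers
    n = n_particles
    x = rng.normal(0.0, 1.0, n)
    ll = 0.0
    for yt in y:
        x = theta * x + rng.normal(0.0, 1.0, n)
        logw = -0.5 * np.log(2 * np.pi) - 0.5 * (yt - x) ** 2
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())  # increment of the log marginal likelihood
        x = x[rng.choice(n, n, p=w / w.sum())]  # multinomial resampling
    return ll

# A crude maximum-likelihood estimate by grid search (illustrative):
# thetas = np.linspace(0.1, 0.9, 17)
# theta_hat = thetas[np.argmax([pf_loglik(t, y) for t in thetas])]
```

Fixing the random seed across evaluations makes the estimated likelihood surface smoother in θ, which helps when the estimate is fed to an optimizer.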
“…Recently, a renewed interest in the use of particle filters for computing marginal likelihood (integrating over state variables) for the purpose of parameter estimation has emerged (Fernandez-Villaverde and Rubio-Ramirez, 2007; Andrieu et al, 2010; Kantas et al, 2009; Malik and Pitt, 2011; DeJong et al, 2013). This is also the context of the present paper.…”
Section: Introduction
confidence: 90%
“…Statistical standard errors are approximated using the (finite difference) observed information matrix at the optimizer. We prefer this approach over methods based on accumulating the score vector (and possibly on-line optimization) at each time step (Kantas et al, 2009;Del Moral et al, 2011;Poyiadjis et al, 2011) as it is easier to program and adapt to new models.…”
Section: Illustrations
confidence: 99%
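The passage above approximates standard errors via the finite-difference observed information matrix at the optimizer. A generic sketch of that recipe follows; the helper name and step size are assumptions, and `loglik` stands in for any (possibly SMC-estimated) log-likelihood function:

```python
import numpy as np

def observed_info_se(loglik, theta_hat, h=1e-4):
    """Standard errors from the finite-difference observed information matrix:
         I(theta_hat) = -Hessian of the log-likelihood at the optimizer,
         SE_i = sqrt([I^{-1}]_{ii}).
    `loglik` maps a parameter vector to a scalar log-likelihood value."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    d = theta_hat.size
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei = np.zeros(d); ei[i] = h
            ej = np.zeros(d); ej[j] = h
            # Central second difference for the (i, j) Hessian entry
            H[i, j] = (loglik(theta_hat + ei + ej) - loglik(theta_hat + ei - ej)
                       - loglik(theta_hat - ei + ej) + loglik(theta_hat - ei - ej)) / (4 * h * h)
    info = -H  # observed information
    return np.sqrt(np.diag(np.linalg.inv(info)))
```

When the log-likelihood is itself a noisy SMC estimate, the step size h must be chosen large enough that the finite differences are not dominated by Monte Carlo noise, e.g. by using common random numbers across evaluations.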