2017
DOI: 10.1109/tsp.2017.2733504

Approximate Smoothing and Parameter Estimation in High-Dimensional State-Space Models

Abstract: We present approximate algorithms for performing smoothing in a class of high-dimensional state-space models via sequential Monte Carlo methods ('particle filters'). In high dimensions, a prohibitively large number of Monte Carlo samples ('particles'), growing exponentially in the dimension of the state space, is usually required to obtain a useful smoother. Employing blocking approximations, we exploit the spatial ergodicity properties of the model to circumvent this curse of dimensionality. We thus obtain ap…
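The blocking idea in the abstract can be illustrated with a short sketch: instead of weighting and resampling whole high-dimensional particles (whose joint weights degenerate as the dimension grows), the state coordinates are partitioned into blocks and each block is weighted and resampled using only its local likelihood terms. The toy model, block size, and localization scheme below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

d, N, T, block_size = 8, 200, 10, 2   # illustrative sizes, not from the paper
blocks = [list(range(k, k + block_size)) for k in range(0, d, block_size)]

def propagate(x):
    # toy locally-interacting dynamics: each coordinate couples to its neighbours
    left, right = np.roll(x, 1, axis=1), np.roll(x, -1, axis=1)
    return 0.5 * x + 0.2 * (left + right) + rng.normal(0.0, 1.0, x.shape)

def local_loglik(x, y):
    # per-coordinate Gaussian observation log-density (up to a constant)
    return -0.5 * (x - y) ** 2

x = rng.normal(0.0, 1.0, (N, d))      # N particles in R^d
true_x = np.zeros(d)
for t in range(T):
    true_x = 0.5 * true_x + rng.normal(0.0, 1.0, d)
    y = true_x + rng.normal(0.0, 1.0, d)          # synthetic observation
    x = propagate(x)
    ll = local_loglik(x, y)                        # (N, d) local log-weights
    new_x = np.empty_like(x)
    for blk in blocks:
        logw = ll[:, blk].sum(axis=1)              # weight uses only this block
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(N, size=N, p=w)           # block-wise resampling
        new_x[:, blk] = x[idx][:, blk]             # recombine across blocks
    x = new_x

estimate = x.mean(axis=0)
print(estimate.shape)                              # (8,)
```

Because each block's resampling ignores likelihood terms outside the block, the filter avoids the exponential-in-dimension particle requirement at the price of a bias at block boundaries, which is the trade-off the cited analyses quantify.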

Cited by 16 publications (9 citation statements)
References 28 publications
“…A detailed theoretical analysis of such a scheme has been proposed in [130], where it was shown rigorously that it can overcome the curse of dimensionality. These methods provide asymptotically biased state and parameter estimates, the bias being controlled under suitable regularity assumptions, or consistent estimates whose mean square errors go to zero at a slower rate than the usual 1/N Monte Carlo rate [130,131]. The main idea behind these techniques is to ignore long-range dependencies when performing Bayes updates in a filtering procedure, an idea borrowed from the ensemble Kalman filter literature, where it is referred to as localization [4].…”
Section: Forward UQ with Multi-Level Monte Carlo
confidence: 99%
“…Second, the smoothing and parameter estimation procedures developed in [131] cannot be applied when only forward simulation of (X_t)_{t≥1} is feasible. Third, while consistent estimates can be obtained by scaling the size of the blocks with N, the resulting rate of convergence is low and new efficient approaches are…”
Section: Forward UQ with Multi-Level Monte Carlo
confidence: 99%
“…Similar algorithms like the general two-filter smoother of Briers et al [2010] have equivalent computational costs. In Finke and Singh [2017], an approximate localization scheme is proposed for the forward–backward algorithm, including theoretical results that guarantee bounds on the asymptotic variance and bias under models that are sufficiently local. In the landmark paper of Andrieu et al [2010], the authors introduced particle Markov chain Monte Carlo, which combines particle filters with either Metropolis–Hastings or Gibbs algorithms.…”
Section: Introduction 1. Background
confidence: 99%