2008
DOI: 10.1109/tsp.2008.925969
Particle Filtering for Large-Dimensional State Spaces With Multimodal Observation Likelihoods

Abstract: We study efficient importance sampling techniques for particle filtering (PF) when (a) the observation likelihood (OL) is frequently multimodal or heavy-tailed, (b) the state space dimension is large, or both. When the OL is multimodal but the state transition pdf (STP) is narrow enough, the optimal importance density is usually unimodal. Under this assumption, many techniques have been proposed. But when the STP is broad, this assumption does not hold. We study how existing techniques can b…


Cited by 46 publications (56 citation statements)
References 37 publications
“…The state transition prior corresponding to the above state models can be written as: [23,22] and PF-MT [24]. PF-MT [24] splits the state vector X_t into X_t = [X_{t,s}, X_{t,r}], where X_{t,s} denotes the coefficients of a small-dimensional state vector, which can change significantly over time, while X_{t,r} refers to the rest of the states (large-dimensional), which usually change much more slowly over time.…”
Section: State Transition Models (mentioning)
confidence: 99%
“…PF-MT [24] splits the state vector X_t into X_t = [X_{t,s}, X_{t,r}], where X_{t,s} denotes the coefficients of a small-dimensional state vector, which can change significantly over time, while X_{t,r} refers to the rest of the states (large-dimensional), which usually change much more slowly over time. PF-MT importance-samples only X_{t,s}, while replacing importance sampling with deterministic posterior Mode Tracking (MT) for X_{t,r}, thus significantly reducing the importance-sampling dimension.…”
Section: State Transition Models (mentioning)
confidence: 99%
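The PF-MT split described in the statement above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the model (a linear-Gaussian observation, variances `Q_S`, `Q_R`, `R`, and the dimensions) is assumed for concreteness, and for this Gaussian case the posterior mode of the residual part has a closed form, so the "mode tracking" step is a single precision-weighted average rather than an iterative optimization. The weight computation is also simplified relative to the full PF-MT weight.

```python
# Hypothetical PF-MT sketch (NOT the authors' code): importance-sample only the
# small "effective" part X_{t,s}; replace sampling of the large residual part
# X_{t,r} with a deterministic posterior-mode step.
import numpy as np

rng = np.random.default_rng(0)

DIM_S, DIM_R = 2, 8            # small (sampled) and large (mode-tracked) dims
Q_S, Q_R, R = 1.0, 0.01, 0.1   # assumed STP variances / observation variance
N_PARTICLES = 100

def pf_mt_step(particles_s, particles_r, y):
    """One PF-MT update for the model y = [x_s, x_r] + N(0, R*I)."""
    # 1) Importance-sample only the small part X_{t,s} from its STP.
    new_s = particles_s + np.sqrt(Q_S) * rng.standard_normal(particles_s.shape)
    # 2) Mode-track X_{t,r}: maximize p(y_r | x_r) p(x_r | previous x_r).
    #    For Gaussians this mode is a precision-weighted average (closed form).
    y_r = y[DIM_S:]
    new_r = (y_r / R + particles_r / Q_R) / (1.0 / R + 1.0 / Q_R)
    # 3) Weight by the observation likelihood of the sampled part
    #    (simplified; the full PF-MT weight includes correction terms).
    resid = y[:DIM_S] - new_s
    logw = -0.5 * np.sum(resid**2, axis=1) / R
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return new_s, new_r, w

# Usage: one step with a synthetic observation.
ps = rng.standard_normal((N_PARTICLES, DIM_S))
pr = np.zeros((N_PARTICLES, DIM_R))
y = np.concatenate([np.ones(DIM_S), 0.5 * np.ones(DIM_R)])
ps, pr, w = pf_mt_step(ps, pr, y)
estimate = np.concatenate([w @ ps, w @ pr])
```

The point of the design is visible in step 2: only `DIM_S` coordinates are sampled per particle, so the importance-sampling dimension is 2 rather than 10, while the 8 slowly varying coordinates move deterministically to their conditional posterior mode.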
“…Although the MEE criterion can minimize both the probabilistic uncertainty and the dispersion of the estimation error, it cannot guarantee a minimum error. In this work, an improved performance index J is considered: (13) …”
Section: Performance Index (mentioning)
confidence: 99%
“…Although the filtering problem can be tackled using numerical integration [8,9], this is difficult to implement when the dimension of the state vector is high. Sequential Monte Carlo simulation [10][11][12][13][14][15], also known as the particle filtering strategy, has shown great advantages in dealing with filtering problems for nonlinear, non-Gaussian systems. Nevertheless, many issues remain to be solved; for example, (1) random sampling may cause Monte Carlo error to accumulate and can even lead to filter divergence; (2) a large number of particles is needed to avoid degeneracy and improve estimation accuracy, which sharply increases the computational cost.…”
Section: Introduction (mentioning)
confidence: 99%
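The degeneracy issue raised in the statement above is usually mitigated by monitoring the effective sample size (ESS) and resampling only when it drops too low. The following is a minimal bootstrap (SIR) particle filter sketch under assumed parameters (a scalar random-walk state with noise variances `Q` and `R`, ESS threshold `N/2`); it is illustrative only and not taken from the cited work.

```python
# Minimal bootstrap (SIR) particle filter sketch: propagate, reweight, and
# resample via systematic resampling when the effective sample size is low.
# Model and parameters are illustrative assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(1)
N = 500
Q, R = 0.5, 0.2  # assumed process / observation noise variances

def systematic_resample(w):
    """Systematic resampling: low-variance, O(N) particle selection."""
    positions = (rng.random() + np.arange(len(w))) / len(w)
    return np.searchsorted(np.cumsum(w), positions)

# Scalar random-walk state observed in Gaussian noise.
x_true = 0.0
particles = rng.standard_normal(N)
w = np.full(N, 1.0 / N)
for _ in range(20):
    x_true += np.sqrt(Q) * rng.standard_normal()
    y = x_true + np.sqrt(R) * rng.standard_normal()
    particles = particles + np.sqrt(Q) * rng.standard_normal(N)  # propagate
    logw = np.log(w) - 0.5 * (y - particles) ** 2 / R            # reweight
    w = np.exp(logw - logw.max())
    w /= w.sum()
    ess = 1.0 / np.sum(w**2)          # effective sample size
    if ess < N / 2:                   # resample only when weights degenerate
        particles = particles[systematic_resample(w)]
        w = np.full(N, 1.0 / N)
estimate = float(w @ particles)
```

Resampling only below the ESS threshold limits the extra Monte Carlo variance that unconditional resampling would introduce, which is one standard answer to the error-accumulation concern quoted above.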