2021
DOI: 10.48550/arxiv.2107.00488
Preprint

Differentiable Particle Filters through Conditional Normalizing Flow

Abstract: Differentiable particle filters provide a flexible mechanism to adaptively train dynamic and measurement models by learning from observed data. However, most existing differentiable particle filters are within the bootstrap particle filtering framework and fail to incorporate the information from latest observations to construct better proposals. In this paper, we utilize conditional normalizing flows to construct proposal distributions for differentiable particle filters, enriching the distribution families t…
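To make the abstract's central idea concrete, the following is a minimal sketch of a conditional flow proposal for a particle filter: a conditioner maps the previous particle and the current observation to the parameters of an affine flow, a sample is drawn by transforming base noise, and the proposal log-density follows from the change-of-variables formula. All names here (`conditioner`, `propose`, the linear conditioner itself) are illustrative assumptions, not the paper's implementation, which would use trained neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def conditioner(x_prev, y, W, b):
    """Toy conditioner: maps (previous particle, observation) to the
    shift and log-scale of a conditional affine flow. A real
    differentiable particle filter would use a neural network here."""
    h = np.concatenate([x_prev, y])
    out = W @ h + b
    d = x_prev.shape[0]
    return out[:d], out[d:]          # shift, log_scale

def propose(x_prev, y, W, b):
    """Sample from the flow proposal q(x_t | x_{t-1}, y_t) and return
    the sample together with its log-density (change of variables)."""
    shift, log_scale = conditioner(x_prev, y, W, b)
    eps = rng.standard_normal(x_prev.shape[0])
    x = shift + np.exp(log_scale) * eps                      # x = f(eps; x_prev, y)
    log_base = -0.5 * np.sum(eps**2) - 0.5 * len(eps) * np.log(2 * np.pi)
    log_q = log_base - np.sum(log_scale)                     # subtract log|det Jacobian|
    return x, log_q

d, dy = 2, 2
W = rng.standard_normal((2 * d, d + dy)) * 0.1
b = np.zeros(2 * d)
x, log_q = propose(np.zeros(d), np.ones(dy), W, b)
print(x.shape, log_q)
```

Because the transform is invertible with a tractable Jacobian, the same construction yields both samples and exact proposal densities, which is what lets the proposal be used inside importance weights and trained end-to-end.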

Cited by 1 publication (2 citation statements)
References 13 publications
“…A method called stop-gradient resampling is outlined in [24]; it modifies only the messages used in backpropagation, is implemented in PyTorch, and makes use of automatic differentiation libraries. In [25], the authors utilise conditional normalising flows to construct flexible probability distributions for differentiable particle filters. A comparison of differentiable filters can be seen in [26].…”
Section: Introduction
confidence: 99%
“…Evaluate the final log likelihood, log p(y_{1:T} | θ), and the associated derivative, d/dθ log p(y_{1:T} | θ), using (16) and (25), respectively.…”
confidence: 99%
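The quantity cited above, log p(y_{1:T} | θ), is what a particle filter estimates by accumulating the log of the mean unnormalised weight at each step. The sketch below illustrates this for a toy one-dimensional linear-Gaussian model with a bootstrap proposal; the model, parameter values, and function name are assumptions for illustration only, and the cited equations (16) and (25) are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_pf_loglik(ys, n_particles=500, a=0.9, q=1.0, r=1.0):
    """Bootstrap particle filter estimate of log p(y_{1:T} | theta) for
    the toy model x_t = a*x_{t-1} + N(0, q), y_t = x_t + N(0, r).
    The estimate accumulates log of the mean unnormalised weight."""
    x = rng.standard_normal(n_particles)                # particles from the prior
    loglik = 0.0
    for y in ys:
        x = a * x + np.sqrt(q) * rng.standard_normal(n_particles)      # propagate
        logw = -0.5 * (y - x) ** 2 / r - 0.5 * np.log(2 * np.pi * r)   # weight
        m = logw.max()
        loglik += m + np.log(np.mean(np.exp(logw - m)))                # log-mean-exp
        w = np.exp(logw - m)
        w /= w.sum()
        x = x[rng.choice(n_particles, size=n_particles, p=w)]          # resample
    return loglik

ys = np.cumsum(rng.standard_normal(20))
print(bootstrap_pf_loglik(ys))
```

Multinomial resampling, as used here, is the step that breaks differentiability; stop-gradient resampling [24] and flow-based proposals [25] are two of the remedies the surrounding citations discuss.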