2020
DOI: 10.48550/arxiv.2010.08729
Preprint

Ensemble Kalman Variational Objectives: Nonlinear Latent Trajectory Inference with A Hybrid of Variational Inference and Ensemble Kalman Filter

Abstract: Variational Inference (VI) combined with Bayesian nonlinear filtering produces state-of-the-art results for latent trajectory inference. A body of recent work has focused on Sequential Monte Carlo (SMC) and its extensions, e.g., Forward Filtering Backward Simulation (FFBSi). These studies have achieved great success; however, a serious problem of particle degeneracy remains. In this paper, we propose Ensemble Kalman Objectives (EnKOs), a hybrid method of VI and the Ensemble Kalman Filter (EnKF), to infer the State S…
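For orientation, the EnKF component the abstract refers to replaces SMC's importance-weighted particles with an equally weighted ensemble that is shifted toward each observation by a sample-covariance Kalman gain, which is what sidesteps weight degeneracy. The sketch below is a generic stochastic EnKF analysis step, not the paper's EnKO objective; the linear observation operator H and Gaussian noise covariance R are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, y, H, R):
    """One stochastic EnKF analysis step (generic sketch, not EnKO itself).

    ensemble : (N, d) array of equally weighted state particles
    y        : (m,) observation vector
    H        : (m, d) linear observation operator (illustrative assumption)
    R        : (m, m) observation noise covariance (illustrative assumption)
    """
    N = ensemble.shape[0]
    # Sample covariance of the forecast ensemble.
    X = ensemble - ensemble.mean(axis=0, keepdims=True)
    P = X.T @ X / (N - 1)
    # Kalman gain built from the ensemble covariance.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    # Perturbed observations keep the analysis ensemble spread consistent.
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    innovations = y_pert - ensemble @ H.T
    # Every particle is moved toward the data; none is reweighted or discarded,
    # which is the property that avoids SMC-style particle degeneracy.
    return ensemble + innovations @ K.T
```

In an EnKO-style setup this analysis step would sit inside the variational objective in place of SMC's resampling, so gradients flow through a deterministic-plus-noise update rather than through particle weights.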

Cited by 1 publication (1 citation statement)
References 42 publications (45 reference statements)
“…Essentially, an end-to-end compression architecture aims to map an audio signal into a more concise portable representation. This characteristic is shared with deep generative models, targeting capturing critical aspects of data to generate new data instances (Ishizone et al [2020], Nistal et al [2021], Dhariwal et al [2020], Dieleman et al [2018]). A notable example of this approach consists of a combination of Variational Autoencoders (VAEs) and Recurrent Neural Networks (RNNs) (Hahn et al [2018]).…”
Section: Introduction (citation type: mentioning)
Confidence: 99%