ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp43922.2022.9747290

Unrolling Particles: Unsupervised Learning of Sampling Distributions

Cited by 7 publications (4 citation statements)
References 65 publications
“…Different ways to learn proposal distributions in differentiable particle filters have been proposed [50,95,96]. In [95,96], the proposal distribution is a Gaussian whose mean and covariance matrix are produced by a neural network that takes the estimate of the previous latent state and the current observation as inputs. To allow proposals more flexible than a Gaussian, [50] constructs the proposal distributions of differentiable particle filters with conditional normalising flows [97].…”
Section: Proposal Distributions (mentioning)
confidence: 99%
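The excerpt above describes the learned-proposal construction of [95,96]. Below is a minimal sketch of that idea, assuming PyTorch and illustrative shapes: a small network maps each particle and the current observation to the mean and diagonal log-variance of a Gaussian proposal, draws a reparameterised sample, and returns the proposal log-density needed for the importance weights. Class and variable names are hypothetical, not taken from the cited papers.

```python
# Hedged sketch of a learned Gaussian proposal for a differentiable particle
# filter, in the spirit of [95, 96]. Dimensions and names are assumptions.
import math
import torch
import torch.nn as nn

class GaussianProposal(nn.Module):
    def __init__(self, state_dim: int, obs_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + obs_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2 * state_dim),  # outputs mean and log-variance
        )

    def forward(self, x_prev: torch.Tensor, y_t: torch.Tensor):
        """x_prev: (N, state_dim) particles; y_t: (obs_dim,) current observation."""
        y_rep = y_t.expand(x_prev.shape[0], -1)           # broadcast obs to all particles
        out = self.net(torch.cat([x_prev, y_rep], dim=-1))
        mean, log_var = out.chunk(2, dim=-1)
        std = torch.exp(0.5 * log_var)
        eps = torch.randn_like(std)
        x_t = mean + std * eps                            # reparameterised sample
        # log q(x_t | x_prev, y_t), used when forming the importance weights
        log_q = (-0.5 * eps ** 2 - torch.log(std)
                 - 0.5 * math.log(2 * math.pi)).sum(dim=-1)
        return x_t, log_q
```

Because the sample is reparameterised, gradients flow through the proposal parameters, which is what makes the filter trainable end to end.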
“…Algorithm unfolding decouples the update steps of an iterative algorithm to create a cascade of hybrid layers that preserve the original update structure but introduce one or more parameters learned from data. This form of domain-inspired learning has been extremely popular and effective in several application areas, including but not limited to non-negative matrix factorization [29], iterative soft thresholding [30], semantic image segmentation [31], blind deblurring [32], clutter suppression [33], particle filtering [34], symbol detection [35], link scheduling [36], energy-aware power allocation [37], and beamforming in wireless networks [21]-[23]. These approaches use neural layers either to learn one or more parameters of the unfolded iterative algorithm or to approximate certain computational steps of the algorithm, reducing complexity and speeding up processing.…”
Section: Introduction (mentioning)
confidence: 99%
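As a concrete illustration of the unfolding pattern described above, here is a minimal sketch, assuming PyTorch, of an unrolled iterative soft-thresholding network in the spirit of [30]: each "layer" is one ISTA iteration for min 0.5*||Ax - y||^2 + lambda*||x||_1, with the step size and threshold made learnable per layer. Matrix shapes and initial values are assumptions for illustration only.

```python
# Hedged sketch of algorithm unfolding: a fixed number of ISTA iterations,
# each with its own learnable step size and soft threshold.
import torch
import torch.nn as nn

def soft_threshold(x: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    return torch.sign(x) * torch.clamp(x.abs() - theta, min=0.0)

class UnrolledISTA(nn.Module):
    def __init__(self, A: torch.Tensor, num_layers: int = 10):
        super().__init__()
        self.register_buffer("A", A)                    # fixed (m, n) measurement matrix
        # one learnable step size and threshold per unrolled iteration
        self.steps = nn.Parameter(0.1 * torch.ones(num_layers))
        self.thetas = nn.Parameter(0.01 * torch.ones(num_layers))

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        """y: (batch, m) measurements; returns sparse estimate x: (batch, n)."""
        x = torch.zeros(y.shape[0], self.A.shape[1], device=y.device)
        for step, theta in zip(self.steps, self.thetas):
            grad = (x @ self.A.T - y) @ self.A          # gradient of 0.5*||Ax - y||^2
            x = soft_threshold(x - step * grad, theta)  # one ISTA iteration
        return x
```

The cascade preserves the original update structure, exactly as the excerpt describes; only the per-layer scalars are learned from data.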
“…On the other hand, it is also necessary to mention that the ANN must first be trained before it can make predictions [38]. Training can be performed with or without supervision, and networks are accordingly classified as supervised [39], semi-supervised [40], or unsupervised [41]. Which applies depends on the type of network and on the training dataset [42].…”
Section: Introduction (mentioning)
confidence: 99%
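As a minimal, hypothetical illustration of the distinction drawn above, the sketch below (PyTorch) contrasts a supervised training step, which fits labelled pairs, with an unsupervised step that uses only the inputs via a reconstruction loss; the model and losses are placeholders, not any specific method from the cited works.

```python
# Hedged sketch: supervised vs. unsupervised gradient steps on a toy model.
import torch
import torch.nn as nn

model = nn.Linear(8, 8)                                  # placeholder network
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

def supervised_step(x: torch.Tensor, y: torch.Tensor) -> float:
    """One gradient step on labelled pairs (x, y)."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
    return loss.item()

def unsupervised_step(x: torch.Tensor) -> float:
    """One gradient step with no labels: reconstruct the input itself."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    opt.step()
    return loss.item()

x, y = torch.randn(32, 8), torch.randn(32, 8)
print(supervised_step(x, y), unsupervised_step(x))
```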