Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2023
DOI: 10.1145/3580305.3599444
Networked Time Series Imputation via Position-aware Graph Enhanced Variational Autoencoders

Cited by 12 publications (3 citation statements). References 25 publications.
“…The loss function for RNN-VAEs is similar to traditional VAEs, consisting of a reconstruction loss (L rec ) and a regularization term (L reg ) to encourage a predefined distribution in the latent space [64], [65], [66], [67], [68].…”
Section: Recurrent Variational Autoencoders (RNN-VAE)
Mentioning confidence: 99%
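The quoted statement describes the standard VAE objective: a reconstruction term L_rec plus a regularizer L_reg that pulls the approximate posterior toward a predefined latent distribution. A minimal NumPy sketch of that two-term loss, assuming a Gaussian posterior, a standard-normal prior, and squared-error reconstruction (all illustrative choices, not taken from the cited papers):

```python
import numpy as np

def vae_loss(x_hat, x, mu, logvar, beta=1.0):
    """Two-term VAE objective: L_rec + beta * L_reg.

    x_hat, x : reconstructed and original inputs
    mu, logvar : mean and log-variance of the approximate posterior q(z|x)
    """
    # L_rec: squared-error reconstruction term
    l_rec = np.sum((x_hat - x) ** 2)
    # L_reg: closed-form KL divergence KL( N(mu, sigma^2) || N(0, I) )
    l_reg = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    return l_rec + beta * l_reg
```

With a perfect reconstruction and a posterior matching the prior (mu = 0, logvar = 0), both terms vanish and the loss is zero, which is a quick sanity check on the formula.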
“…The loss function for conditional generative models depends on the specific architecture and conditions used but typically involves both the reconstruction loss and a term related to the conditions used for generation [69], [70], [64], [71], [72], [73], [74], [75], [76], [77] (Table II).…”
Section: Conditional Generative Models
Mentioning confidence: 99%
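This second statement notes that conditional generative models add a condition-dependent term to the reconstruction loss. For a time-series imputation setting such as the paper above, one common conditioning signal is an observation mask, so the reconstruction is scored only on observed entries. A hedged sketch under that assumption (the mask-as-condition choice and all names are illustrative, not the architecture of any cited work):

```python
import numpy as np

def conditional_loss(x_hat, x, mask, mu, logvar, beta=1.0):
    """Conditional variant: reconstruction restricted to the condition.

    mask : 1 where x is observed, 0 where it is missing; the model is
           conditioned on this mask, so unobserved entries are not scored.
    """
    # Condition-aware L_rec: error counted only on observed entries
    l_rec = np.sum(mask * (x_hat - x) ** 2)
    # Same closed-form KL regularizer as in the unconditional case
    l_reg = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    return l_rec + beta * l_reg
```

Setting every mask entry to zero removes the reconstruction term entirely, which illustrates how the condition reshapes the objective rather than merely rescaling it.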