2022
DOI: 10.1609/aaai.v36i7.20782
Conditional Loss and Deep Euler Scheme for Time Series Generation

Abstract: We introduce three new generative models for time series that are based on Euler discretization of Stochastic Differential Equations (SDEs) and Wasserstein metrics. Two of these methods rely on the adaptation of generative adversarial networks (GANs) to time series. The third algorithm, called Conditional Euler Generator (CEGEN), minimizes a dedicated distance between the transition probability distributions over all time steps. In the context of Itô processes, we provide theoretical guarantees that minimizing…
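The Euler discretization of an SDE that these generators build on can be sketched as follows. This is a minimal illustration of the standard Euler-Maruyama scheme, not the paper's implementation; the drift/diffusion functions and step count below are hypothetical:

```python
import numpy as np

def euler_maruyama(x0, drift, diffusion, T=1.0, n_steps=100, rng=None):
    """Simulate one path of dX_t = drift(X_t) dt + diffusion(X_t) dW_t
    with the Euler-Maruyama scheme."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    path = np.empty(n_steps + 1)
    path[0] = x0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment over one step
        path[i + 1] = path[i] + drift(path[i]) * dt + diffusion(path[i]) * dw
    return path

# Ornstein-Uhlenbeck example: dX_t = -2 X_t dt + 0.3 dW_t
path = euler_maruyama(1.0, lambda x: -2.0 * x, lambda x: 0.3,
                      rng=np.random.default_rng(0))
```

A generative model in this family learns the drift and diffusion (or the transition kernel they induce) from data rather than taking them as given.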

Cited by 11 publications (15 citation statements)
References 32 publications
“…This measure compares the correlations found in multivariate time series of the real dataset to those of the synthetic dataset. Initially, Remlinger et al [50] described it as the "term-by-term mean squared error (MSE) between empirical correlation from reference samples on one side and from generated samples on the other side". However, this description is vague: it remains unclear which correlation is used, how it is applied, and how the MSEs are aggregated.…”
Section: Distribution-level Measures
confidence: 99%
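One plausible reading of that correlation measure can be sketched as follows. Both the choice of Pearson correlation and the aggregation by averaging are assumptions here, since, as the citing authors note, the original description leaves them unspecified:

```python
import numpy as np

def correlation_mse(real, synthetic):
    """Term-by-term MSE between empirical correlation matrices.

    real, synthetic: arrays of shape (n_samples, n_dims).
    Assumptions (not fixed by the original description): Pearson
    correlation across dimensions, entries aggregated by averaging.
    """
    c_real = np.corrcoef(real, rowvar=False)       # (n_dims, n_dims)
    c_syn = np.corrcoef(synthetic, rowvar=False)
    return float(np.mean((c_real - c_syn) ** 2))

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 3))
print(correlation_mse(x, x))  # identical samples -> 0.0
```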
“…Marginal metrics. This is a combination of three classical statistics used to roughly compare the marginal distribution of each time step in the real dataset to its counterpart in the synthetic dataset [50]. Namely, these are the average, 95th percentile, and 5th percentile, which we refer to as s1, s2 and s3, respectively, below.…”
Section: Length Histogram
confidence: 99%
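The three per-time-step marginal statistics can be compared as in this sketch. This is a minimal illustration; aggregating each statistic's discrepancy by a mean absolute difference over time steps is an assumption, not the cited protocol:

```python
import numpy as np

def marginal_metrics(real, synthetic):
    """Compare per-time-step marginal statistics of two sets of paths.

    real, synthetic: arrays of shape (n_paths, n_steps).
    Returns the absolute differences of the mean (s1), 95th percentile
    (s2) and 5th percentile (s3), averaged over time steps (an assumed
    aggregation).
    """
    def stats(x):
        return (np.mean(x, axis=0),
                np.percentile(x, 95, axis=0),
                np.percentile(x, 5, axis=0))
    s_real, s_syn = stats(real), stats(synthetic)
    return [float(np.mean(np.abs(r - s))) for r, s in zip(s_real, s_syn)]

rng = np.random.default_rng(0)
paths = rng.normal(size=(1000, 20))
print(marginal_metrics(paths, paths))  # identical data -> [0.0, 0.0, 0.0]
```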
“…Since there are various methods to calculate the distance between vectors, the following experiments are conducted to demonstrate the superiority of Manhattan distance. Besides Manhattan distance, Euclidean distance [55], Minkowski distance [56], Chebyshev distance [57], and cosine similarity [58] are also selected, and the experimental results are shown in Tables 3 and 4, respectively. From Tables 3 and 4, it can be seen that Manhattan distance achieves the optimal Accuracy, Precision, Recall, F1 and classification time on both baseline datasets.…”
Section: F. Model Analysis, 1) Importance Analysis of Each Component
confidence: 99%
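The distances compared in that experiment have standard closed forms; a minimal sketch (illustrative only, not the cited authors' code):

```python
import numpy as np

def manhattan(a, b):
    return float(np.sum(np.abs(a - b)))          # L1 norm of the difference

def euclidean(a, b):
    return float(np.sqrt(np.sum((a - b) ** 2)))  # L2 norm of the difference

def minkowski(a, b, p=3):
    return float(np.sum(np.abs(a - b) ** p) ** (1.0 / p))  # Lp; p=1 -> Manhattan, p=2 -> Euclidean

def chebyshev(a, b):
    return float(np.max(np.abs(a - b)))          # L-infinity norm

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])
print(manhattan(a, b))   # 1 + 2 + 3 = 6.0
print(chebyshev(a, b))   # 3.0
print(cosine_sim(a, b))  # parallel vectors, so approximately 1.0
```

Note that cosine similarity measures angle rather than magnitude, which is why the parallel vectors above score near 1.0 despite being far apart under the other metrics.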
“…Indeed, in order to capture the potentially complex dynamics of variables across time, it is not sufficient to learn the time marginals or even the joint distribution without exploiting the sequential structure. Increasing attention has been paid to these methods in the literature, and state-of-the-art generative methods for time series are: Time-series GAN [27], which combines an unsupervised adversarial loss on real/synthetic data with a supervised loss for generating sequential data; Quant GAN [25], with an adversarial generator using temporal convolutional networks; causal optimal transport COT-GAN [26], with an adversarial generator using the adapted Wasserstein distance for processes; the Conditional loss Euler generator [20], starting from a diffusion representation of the time series and minimizing the conditional distance between transition probabilities of real/synthetic samples; signature embeddings of time series [9], [18], [3]; and functional data analysis with neural SDEs [6].…”
Section: Introduction
confidence: 99%