2019 International Conference on Computational Science and Computational Intelligence (CSCI) 2019
DOI: 10.1109/csci49370.2019.00046
Mitigating Drift in Time Series Data with Noise Augmentation

Cited by 32 publications (27 citation statements)
References 3 publications
“…It is able to do this by effectively creating new patterns with the assumption that the unseen test patterns differ from the training patterns only by a factor of noise. In addition, jittering has been shown to help mitigate time series drift for various neural network models [28]. Time series drift occurs when the data distribution changes due to the introduction of new data.…”
Section: Magnitude Domain Transformations
confidence: 99%
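Jittering, as described in the statement above, amounts to adding small i.i.d. Gaussian noise to each time step. A minimal sketch in NumPy (the function name `jitter` and the `sigma` default are illustrative, not taken from the cited paper):

```python
import numpy as np

def jitter(x: np.ndarray, sigma: float = 0.03, seed=None) -> np.ndarray:
    """Return a noisy copy of a time series: x + N(0, sigma^2) per step."""
    rng = np.random.default_rng(seed)
    return x + rng.normal(loc=0.0, scale=sigma, size=x.shape)

# Example: augment one synthetic series.
series = np.sin(np.linspace(0, 2 * np.pi, 100))
augmented = jitter(series, sigma=0.05, seed=0)
```

Each call produces a new pattern that differs from the original only by noise, matching the assumption the statement describes.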
“…Similar to data augmentation for images, most data augmentation techniques for time series are based on random transformations of the training data, for example adding random noise [28], slicing or cropping [29], scaling [30], random warping in the time dimension [28,30], and frequency warping [31]. The problem with random transformation-based data augmentation is that time series are diverse, each with different properties, and not every transformation is applicable to every dataset.…”
Section: Introduction
confidence: 99%
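Two of the transformations listed above, slicing (cropping a random window) and scaling (multiplying by a random factor), can be sketched briefly. The function names and default parameters below are illustrative assumptions, not from the cited works:

```python
import numpy as np

def window_slice(x: np.ndarray, crop_ratio: float = 0.9, seed=None) -> np.ndarray:
    """Crop a random contiguous window covering crop_ratio of the series."""
    rng = np.random.default_rng(seed)
    n = len(x)
    win = int(n * crop_ratio)
    start = rng.integers(0, n - win + 1)
    return x[start:start + win]

def scale(x: np.ndarray, sigma: float = 0.1, seed=None) -> np.ndarray:
    """Multiply the whole series by one random factor drawn from N(1, sigma^2)."""
    rng = np.random.default_rng(seed)
    return x * rng.normal(1.0, sigma)
```

Whether either transformation preserves the class label depends on the dataset, which is exactly the applicability problem the statement raises.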
“…Furthermore, phenological events and the underlying trend of the VI curves of each crop are accurately modeled using computationally optimal transformations [64]. Unlike the existing transform-based approaches, the proposed approach does not alter the wavelet coefficients that characterize a given crop class [35], [45].…”
Section: B. Improvement In Classification Results
confidence: 99%
“…The conventional approaches, such as geometric transformations, kernel filters (smoothing or enhancement), image mixing (interchanging slices), and random erasing (removing random slices), are suited only for image-related tasks [35], [40], [42]. For time-series data, augmentations such as jittering (random noise addition) [45], slicing (cropping) [46], magnitude warping (smooth element-wise magnitude change) [18], [38], permutation (rearranging slices) [45], [47], rotation (flipping for univariate; rotation for multivariate) [35], scaling (pattern-wise magnitude change) [47], random warping in the time dimension (time step deformation) [45], [47], and frequency warping (frequency deformation) [48] are adopted. The slicing-based augmentations responded positively to longer time series, but pattern mixing methods are negatively correlated with time series length.…”
confidence: 99%
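Time warping, which appears in both of the augmentation lists above, deforms the time axis smoothly and resamples the series. A minimal sketch under assumed defaults (the function name, knot count, and `sigma` are illustrative, not from the cited papers):

```python
import numpy as np

def time_warp(x: np.ndarray, sigma: float = 0.2, knots: int = 4, seed=None) -> np.ndarray:
    """Warp the time axis by a smooth random speed curve, then resample."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # Random speed multipliers at a few knots, interpolated to every step.
    knot_pos = np.linspace(0, n - 1, knots + 2)
    knot_speed = rng.normal(1.0, sigma, knots + 2)
    speed = np.clip(np.interp(np.arange(n), knot_pos, knot_speed), 0.05, None)
    # Cumulative speed gives a monotone warped time axis; rescale to [0, n-1].
    warped_t = np.cumsum(speed)
    warped_t = (warped_t - warped_t[0]) / (warped_t[-1] - warped_t[0]) * (n - 1)
    # Resample the series on the original uniform grid.
    return np.interp(np.arange(n), warped_t, x)
```

The endpoints are pinned, so the warp stretches and compresses the interior of the series while keeping its overall span, which is the "time step deformation" the statement describes.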