Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence 2021
DOI: 10.24963/ijcai.2021/631

Time Series Data Augmentation for Deep Learning: A Survey

Abstract: Deep learning has recently achieved remarkable performance on many time series analysis tasks. The superior performance of deep neural networks relies heavily on a large amount of training data to avoid overfitting. However, labeled data may be limited in many real-world time series applications, such as classification of medical time series and anomaly detection in AIOps. As an effective way to enhance the size and quality of the training data, data augmentation is crucial to the successful application of deep learning…

Cited by 359 publications (142 citation statements); references 26 publications.

Citation statements (ordered by relevance):
“…Data augmentation is an efficient tool for increasing the size and enhancing the quality of the training data. It mainly aims to generate more data covering unseen input spaces (Wen et al, 2020). Data augmentation can make the model more robust by enlarging the size and adding noise or causing transformation.…”
Section: Data Augmentation (mentioning; confidence: 99%)
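
To make the quoted idea concrete, below is a minimal NumPy sketch (not code from the surveyed paper) of two of the simplest transformation-based augmentations it alludes to: jittering, i.e. adding Gaussian noise, and magnitude scaling. The function names and sigma values are illustrative assumptions.

```python
import numpy as np

def jitter(x: np.ndarray, sigma: float = 0.03) -> np.ndarray:
    """Add zero-mean Gaussian noise to every time step (jittering)."""
    return x + np.random.normal(loc=0.0, scale=sigma, size=x.shape)

def scale(x: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Multiply the whole series by one random factor drawn around 1 (magnitude scaling)."""
    return x * np.random.normal(loc=1.0, scale=sigma)

if __name__ == "__main__":
    series = np.sin(np.linspace(0, 4 * np.pi, 128))  # toy signal
    augmented = scale(jitter(series))
    print(series.shape, augmented.shape)  # both (128,)
```

Applying such transforms to each training window yields new samples that enlarge the training set while preserving the underlying class label.
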
“…It should be noted that this type of augmentation is only applicable to raw EEG data. Time Warping (TimeW): TimeW is an augmentation technique that randomly modifies the temporal position of a signal [31]. It simulates the variation of the temporal location of an event in a time window.…”
Section: Authentication by Classification (mentioning; confidence: 99%)
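
Time warping as described in the quoted statement can be sketched as follows: a smooth random "speed" curve stretches and compresses the time axis, and the signal is resampled onto the warped time stamps. This is a hedged illustration assuming a simple linear-interpolation variant; the parameter names (sigma, knots) are arbitrary choices, not part of the cited work.

```python
import numpy as np

def time_warp(x: np.ndarray, sigma: float = 0.2, knots: int = 4) -> np.ndarray:
    """Warp the time axis with a smooth random speed curve, then resample the signal."""
    n = len(x)
    # Random local "speed" factors at a few anchor points spread over the series.
    anchor_pos = np.linspace(0, n - 1, knots + 2)
    anchor_speed = np.random.normal(loc=1.0, scale=sigma, size=knots + 2)
    # Interpolate speeds to every time step and keep them strictly positive.
    speed = np.clip(np.interp(np.arange(n), anchor_pos, anchor_speed), 0.1, None)
    # Integrate speed to get warped time stamps, rescaled to the original window.
    warped_t = np.cumsum(speed)
    warped_t = (warped_t - warped_t[0]) / (warped_t[-1] - warped_t[0]) * (n - 1)
    # Sample the original signal at the warped positions.
    return np.interp(np.arange(n), warped_t, x)

if __name__ == "__main__":
    signal = np.sin(np.linspace(0, 4 * np.pi, 256))
    print(signal.shape, time_warp(signal).shape)  # both (256,)
```

The effect is that events in the series occur slightly earlier or later than in the original window, which matches the "variation of the temporal location of an event" described above.
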
“…If the train set contains information (e.g., same data points) from the test set, then it is likely that the model will present overfitting. Furthermore, to accurately train and test classification models, it is ideal that all classes have similar sizes, especially during the testing stages [62]. Having balanced classes allows us to easily compare classification metrics such as the confusion matrix, precision, recall, and f1-score.…”
Section: Training, Validation, and Test Sets (mentioning; confidence: 99%)
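
A common way to obtain the balanced, leakage-free splits the quoted statement calls for is a stratified split performed before any augmentation. The sketch below uses scikit-learn's train_test_split on toy data; the array shapes and class ratio are made up for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy labeled time-series windows: 200 samples of length 64, imbalanced two-class labels.
X = np.random.randn(200, 64)
y = np.array([0] * 150 + [1] * 50)

# Stratified splitting keeps the class ratio identical in train and test,
# and splitting before any augmentation avoids leaking (near-)duplicates
# of test windows into the training set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

print(np.bincount(y_train), np.bincount(y_test))  # [120 40] and [30 10]
```

With comparable class sizes in both partitions, metrics such as the confusion matrix, precision, recall, and F1-score can be compared directly across classes.
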