2022
DOI: 10.48550/arxiv.2203.09978
Preprint
WOODS: Benchmarks for Out-of-Distribution Generalization in Time Series Tasks

Abstract: Machine learning models often fail to generalize well under distributional shifts. Understanding and overcoming these failures have led to a research field of Out-of-Distribution (OOD) generalization. Despite being extensively studied for static computer vision tasks, OOD generalization has been underexplored for time series tasks. To shed light on this gap, we present WOODS: eight challenging open-source time series benchmarks covering a diverse range of data modalities, such as videos, brain recordings, and…

Cited by 4 publications (7 citation statements)
References 55 publications (67 reference statements)
“…Adam [55] was adopted as the optimizer for the training process. To reduce bias [16], results were averaged over nine combinations of three batch sizes (64, 128, and 256) and three learning rates (0.0008, 0.001, and 0.003). To account for class imbalance, the percentage of instances per class in the training set was given to the cross-entropy loss function as class weights.…”
Section: Experiments and Results (mentioning)
confidence: 99%
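
The quoted protocol is straightforward to reproduce in outline. Below is a minimal, hypothetical PyTorch sketch of it, assuming a placeholder `run_experiment` standing in for one full Adam training and evaluation run; the 3×3 hyperparameter grid and the class-frequency loss weights follow the description above, not the cited authors' actual code.

```python
# Hypothetical sketch, not the cited authors' code: average results over a
# 3x3 grid of batch sizes and learning rates, with the per-class fraction of
# training instances passed to the cross-entropy loss as class weights.
import itertools

import torch
import torch.nn as nn

def class_weights(labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    # Percentage of instances per class in the training set, as described.
    counts = torch.bincount(labels, minlength=num_classes).float()
    return counts / counts.sum()

def run_experiment(batch_size: int, lr: float, criterion: nn.Module) -> float:
    # Placeholder for one full run: train with torch.optim.Adam(lr=lr) and
    # the given batch size and loss, then return test accuracy.
    return 0.0

train_labels = torch.randint(0, 5, (1000,))  # dummy labels standing in for the real set
criterion = nn.CrossEntropyLoss(weight=class_weights(train_labels, num_classes=5))

accs = [run_experiment(bs, lr, criterion)
        for bs, lr in itertools.product([64, 128, 256], [8e-4, 1e-3, 3e-3])]
print(f"accuracy averaged over {len(accs)} runs: {sum(accs) / len(accs):.2f}")
```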
“…Gagnon et al. [16] included a HAR dataset in a benchmark to compare domain generalization methods applied to deep neural networks. The results indicate a 9.07% drop in accuracy, from 93.35% ID to 84.28% OOD, on a dataset where different devices worn in different positions characterize the possible domains.…”
Section: Related Work (mentioning)
confidence: 99%
“…Some of the aforementioned regularization methods have been investigated as a potential solution to the OOD generalization problem in HAR. Gagnon et al. [35] included a HAR dataset in their domain generalization benchmark. Their results indicate a 9.07% drop in accuracy, from 93.35% In-Distribution (ID) to 84.28% OOD, on a dataset where different devices worn in different positions characterize the possible domains.…”
Section: Related Work (mentioning)
confidence: 99%
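
Note that the quoted 9.07% figure is an absolute gap in percentage points (93.35 − 84.28 = 9.07), and in such benchmarks OOD accuracy is typically averaged over held-out test domains. A minimal, hypothetical sketch of that bookkeeping, with made-up accuracies (the function name and numbers are illustrative assumptions, not the benchmark's code):

```python
# Hypothetical sketch of the usual leave-one-domain-out bookkeeping behind an
# ID/OOD gap like the one quoted (here, a "domain" is a device/position pair).
def id_ood_gap(acc_by_domain: dict[str, tuple[float, float]]) -> float:
    """Maps held-out domain -> (ID accuracy, OOD accuracy) in percent;
    returns the average absolute gap in percentage points."""
    gaps = [id_acc - ood_acc for id_acc, ood_acc in acc_by_domain.values()]
    return sum(gaps) / len(gaps)

# Illustrative numbers only; the quoted result is 93.35% ID vs 84.28% OOD,
# i.e., a gap of 9.07 percentage points.
print(id_ood_gap({"wrist": (93.0, 85.1), "hip": (93.7, 83.5)}))
```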
“…To address the issue, most previous DG methods focus on extracting domain-invariant features across several stationary source domains (Sun and Saenko 2016; Li et al. 2018; Sagawa et al. 2020; Arjovsky et al. 2019). Nevertheless, domains can also be non-stationary and evolve according to certain underlying structures (Wang et al. 2022; Qin, Wang, and Li 2022; Yao et al. 2022a; Gagnon-Audet et al. 2022). For instance, banks assess whether a person is likely to default on a loan by examining factors such as income, career, and marital status.…”
Section: Introduction (mentioning)
confidence: 99%
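
As a concrete instance of the "domain-invariant features" idea in the quote, here is a minimal sketch of the CORAL penalty (Sun and Saenko 2016), one of the cited methods: it aligns second-order feature statistics across domains. This is an illustrative sketch, not the cited paper's implementation.

```python
# Minimal sketch of the CORAL penalty: align feature covariances between two
# source domains so the encoder learns domain-invariant representations.
import torch

def coral_loss(feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
    # feats_*: (n_samples, d) feature matrices drawn from two source domains.
    d = feats_a.size(1)
    cov_a = torch.cov(feats_a.T)  # (d, d) feature covariance, domain A
    cov_b = torch.cov(feats_b.T)  # (d, d) feature covariance, domain B
    return ((cov_a - cov_b) ** 2).sum() / (4 * d * d)

# Added to the task loss, the penalty pushes the encoder toward features
# whose distribution (up to second order) matches across source domains.
print(coral_loss(torch.randn(32, 16), torch.randn(32, 16)))
```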