2021
DOI: 10.48550/arxiv.2112.13547
Preprint
PRIME: A few primitives can boost robustness to common corruptions

Abstract: Despite their impressive performance on image classification tasks, deep networks have a hard time generalizing to many common corruptions of their data. To fix this vulnerability, prior works have mostly focused on increasing the complexity of their training pipelines, combining multiple methods, in the name of diversity. However, in this work, we take a step back and follow a principled approach to achieve robustness to common corruptions. We propose PRIME, a general data augmentation scheme that consists of…

Cited by 3 publications (11 citation statements)
References 14 publications
“…We investigated the impact of data augmentation on classifying the filling level of a container, when the containers in the test-time images have different properties than those in the training set. We compared transfer learning (with or without adversarial training [13]) and a principled augmentation approach that explicitly operates on the geometry, color, and spatial frequencies of the training images [19] in order to generalize the shape, color, and spectral content of an available, limited training dataset. We showed that the principled augmentation can either replace transfer learning approaches, which are computationally more expensive, or be combined with adversarial transfer learning to improve its performance.…”
Section: Discussion
confidence: 99%
“…The performance on the validation set of C-CMM on the "shifted" containers is systematically lower than that on containers that share similarities with those in the training set (overfitting) [13]. To address this limitation, we consider a data augmentation scheme that generates diverse augmentations using a set of primitive max-entropy transformations in the spatial (τ), color (γ), and spectral (ω) domains [19]. We expect these transformations to relate to the changes we want to introduce during training: container shape through τ, container color through γ, and illumination and texture through ω.…”
Section: B. Principled Data Augmentation
confidence: 99%
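The primitive transformations named in the statement above (spatial τ, color γ, spectral ω) can be sketched as random image transforms composed at training time. The sketch below is a minimal illustration using NumPy, assuming float images with values in [0, 1]; the function names, parameter values, and noise models are simplified stand-ins chosen for this example, not PRIME's exact max-entropy transform families.

```python
import numpy as np

rng = np.random.default_rng(0)

def color_transform(img, strength=0.1):
    # gamma-like color jitter: mix channels via a randomly perturbed
    # identity matrix (illustrative stand-in for the gamma primitive)
    mix = np.eye(3) + strength * rng.standard_normal((3, 3))
    return np.clip(img @ mix.T, 0.0, 1.0)

def spectral_transform(img, strength=0.5):
    # omega-like spectral jitter: randomly rescale Fourier coefficients
    # per spatial frequency, shared across the 3 color channels
    f = np.fft.fft2(img, axes=(0, 1))
    noise = 1.0 + strength * rng.standard_normal(img.shape[:2])[..., None]
    return np.clip(np.real(np.fft.ifft2(f * noise, axes=(0, 1))), 0.0, 1.0)

def spatial_transform(img, strength=2.0):
    # tau-like geometric jitter: displace each pixel by a small random
    # integer offset, clipped to stay inside the image
    h, w = img.shape[:2]
    dy = (strength * rng.standard_normal((h, w))).round().astype(int)
    dx = (strength * rng.standard_normal((h, w))).round().astype(int)
    ys = np.clip(np.arange(h)[:, None] + dy, 0, h - 1)
    xs = np.clip(np.arange(w)[None, :] + dx, 0, w - 1)
    return img[ys, xs]

def prime_like_augment(img, n_ops=2):
    # compose a random subset of the primitives, in random order
    ops = [spatial_transform, color_transform, spectral_transform]
    for i in rng.choice(len(ops), size=n_ops, replace=False):
        img = ops[i](img)
    return img

# usage: augment one random 32x32 RGB image
img = rng.random((32, 32, 3))
aug = prime_like_augment(img)
```

The design point the citing work relies on is that each primitive targets one nuisance factor (shape via τ, color via γ, illumination/texture via ω), so their random composition yields diverse training views from a limited dataset.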