Data augmentation instead of explicit regularization
2018 · Preprint
DOI: 10.48550/arxiv.1806.03852


Cited by 31 publications (35 citation statements) · References 0 publications
“…Data Augmentation and Regularization Data augmentation expands the training data with examples generated via prior knowledge, which can be seen as an implicit regularization (Zhang et al., 2016; Hernández-García & König, 2018) where the prior is specified as virtual examples. Zhang et al. (2018b) propose mixup, which generates augmented samples via convex combinations of pairs of examples.…”
Section: Related Work
confidence: 99%
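The convex-combination idea behind mixup mentioned above can be sketched in a few lines. This is a minimal illustration, not the reference implementation from Zhang et al. (2018b); the function name and the `alpha` default are illustrative assumptions.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Build one virtual example as a convex combination of two real
    examples: the mixing weight lam is drawn from Beta(alpha, alpha),
    and both the inputs and the (one-hot) labels are interpolated."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1 - lam) * x2   # mixed input
    y = lam * y1 + (1 - lam) * y2   # mixed soft label
    return x, y

# Usage: mix a pair of toy 4-pixel "images" with one-hot labels.
xa, ya = np.array([1.0, 0.0, 0.0, 0.0]), np.array([1.0, 0.0])
xb, yb = np.array([0.0, 0.0, 0.0, 1.0]), np.array([0.0, 1.0])
xm, ym = mixup(xa, ya, xb, yb)
```

Because the labels are mixed with the same weight as the inputs, the resulting soft label always remains a valid probability distribution.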
“…On one hand, explicit regularization such as weight decay and dropout constrains the model capacity. On the other hand, implicit regularization such as data augmentation enlarges the support of the training distribution via prior knowledge (Hernández-García & König, 2018).…”
Section: Introduction
confidence: 99%
“…We refer to these methodologies with the term data augmentation. Data augmentation approaches based on prior information (constraints) have begun to be explored in recent years, especially to cope with data sets of limited size and the related issue of poor generalization performance [12]. Data augmentation techniques have a strong history of success in the context of image-based learning tasks (e.g.…”
Section: Data Augmentation
confidence: 99%
“…Data augmentation has significantly improved the generalization of deep neural networks on a variety of image tasks, including image classification (Perez & Wang, 2017; DeVries & Taylor, 2017), object detection (Zhong et al., 2017; Guo & Gould, 2015), and instance segmentation (Wang et al., 2018). Prior work has shown that data augmentation on its own can perform better than, or on par with, highly regularized models using other regularization techniques such as dropout (Hernández-García & König, 2018). This effectiveness is especially prominent in low data regimes, where models often fail to capture the full variance of the data in the training set (Zhang et al., 2019).…”
Section: Introduction
confidence: 99%
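The label-preserving image transformations these statements refer to (and which the cited paper compares against explicit regularizers like dropout) can be sketched with the two classic operations, random horizontal flip and random crop with padding. This is an illustrative sketch; the function name and padding size are assumptions, not taken from the paper.

```python
import numpy as np

def augment(img, rng, pad=2):
    """Apply two standard label-preserving augmentations to a 2-D
    grayscale image: a random horizontal flip (p = 0.5), then a
    random crop after zero-padding `pad` pixels on every side."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                       # horizontal flip
    padded = np.pad(img, pad, mode="constant")   # zero-pad all sides
    top = rng.integers(0, 2 * pad + 1)           # random crop offsets
    left = rng.integers(0, 2 * pad + 1)
    h, w = img.shape
    return padded[top:top + h, left:left + w]    # crop to original size

# Usage: augment a toy 4x4 image; output keeps the original shape.
rng = np.random.default_rng(0)
img = np.arange(16.0).reshape(4, 4)
out = augment(img, rng)
```

Because both transformations preserve the label, each pass over the training set effectively samples fresh virtual examples from a wider distribution, which is the regularizing effect the excerpts describe.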