2022
DOI: 10.48550/arxiv.2202.01263
Preprint

NoisyMix: Boosting Model Robustness to Common Corruptions

Cited by 1 publication (1 citation statement)
References: 0 publications
“…As a strong augmentation technique in supervised learning, Mixup has been shown, both empirically and theoretically, to boost the performance of neural networks through its regularization power [2], [3], [4]. Beyond this reliable performance, Mixup has also been reported to give deep models better calibration [5], robustness [6], [7], and generalization [6], and it is therefore widely used in adversarial training [4], domain adaptation [8], class-imbalance problems [9], and so on. However, because Mixup-style training depends heavily on data properties [10], in certain cases the traditional Mixup labels cannot correctly describe the augmented data.…”
Section: Introduction
confidence: 99%
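The "traditional Mixup labels" mentioned in the excerpt are convex combinations of the paired one-hot labels, mixed with the same coefficient as the inputs. The following is a minimal sketch of standard Mixup for context, not code from the cited paper; the function name mixup_batch and the parameter alpha are illustrative assumptions.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=np.random.default_rng()):
    """Return convex combinations of a batch and a shuffled copy of itself.

    x: array of shape (batch, ...) holding input samples
    y: array of shape (batch, num_classes) holding one-hot labels
    """
    lam = rng.beta(alpha, alpha)            # mixing coefficient lambda ~ Beta(alpha, alpha)
    perm = rng.permutation(len(x))          # random pairing of samples within the batch
    x_mix = lam * x + (1.0 - lam) * x[perm]         # interpolated inputs
    y_mix = lam * y + (1.0 - lam) * y[perm]         # the "traditional Mixup label"
    return x_mix, y_mix
```

Because the label is interpolated with the same lambda as the input, it can misrepresent the mixed sample when the data do not behave linearly between the two originals, which is the concern the citing authors raise.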