2022
DOI: 10.48550/arxiv.2203.06844
Preprint

RecursiveMix: Mixed Learning with History

Abstract: Mix-based augmentation has proven fundamental to the generalization of deep vision models. However, current augmentations mix samples only within the current data batch during training, ignoring the knowledge accumulated over the learning history. In this paper, we propose a recursive mixed-sample learning paradigm, termed "RecursiveMix" (RM), by exploring a novel training strategy that leverages the historical input-prediction-label triplets. More specifically, we iteratively resize the input ima…
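The core mixing operation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `recursive_mix` and the patch-size parameter `alpha` are hypothetical, and the full method additionally reuses the historical prediction as a soft target for the pasted region, which this sketch only returns as an ROI.

```python
import numpy as np

def recursive_mix(cur_img, hist_img, alpha=0.5, rng=None):
    """Sketch of RecursiveMix-style mixing: resize the historical input
    and paste it into a random region of the current input.

    `alpha` (hypothetical parameter) bounds the fraction of each side
    that the historical patch may occupy.
    """
    rng = rng or np.random.default_rng()
    H, W, _ = cur_img.shape
    lam = rng.uniform(0, alpha)                  # relative patch size
    ph, pw = max(1, int(H * lam)), max(1, int(W * lam))
    # nearest-neighbour resize of the historical image to the patch size
    ys = np.arange(ph) * hist_img.shape[0] // ph
    xs = np.arange(pw) * hist_img.shape[1] // pw
    patch = hist_img[ys][:, xs]
    # random paste location within the current image
    y0 = rng.integers(0, H - ph + 1)
    x0 = rng.integers(0, W - pw + 1)
    mixed = cur_img.copy()
    mixed[y0:y0 + ph, x0:x0 + pw] = patch
    roi = (y0, x0, ph, pw)   # region whose target comes from history
    return mixed, roi, lam
```

In the actual method, the classification loss would combine the current label (weighted by the unmixed area) with a consistency term aligning the model's prediction on the ROI with the stored historical prediction.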

Cited by 2 publications (4 citation statements) | References 51 publications
“…Nguyen et al [137] solve the issue of network overfitting on noisy datasets by filtering out incorrect labels during training and using EMA to ensemble historical predictions. Different from the above work, Yang et al [133] propose RecursiveMix, which takes the prediction from the historical input as the target and aligns it with the semantics of the corresponding ROI in the current training input. Intermediate Feature Representation.…”
Section: Aspect of Historical Type
confidence: 99%
“…Last, the fourth aspect focuses on employing historical input for data augmentation. Yang et al [133] zoom and paste the historical input onto the current round of training images to perform mixed-sample data augmentation.…”
Section: Aspect of Functional Part
confidence: 99%
“…Later methods, including Puzzle Mix, SaliencyMix (Uddin et al, 2020), and Attentive CutMix, leverage salient regions for informative mixture generation. Recently, Yang et al (2022) proposed the RecursiveMix strategy, which employs historical input-prediction-label triplets for scale-invariant feature learning. Despite the better performance, a drawback of these methods is the heavily increased training cost due to saliency extraction or historical information exploitation.…”
Section: Data Mixing Strategy
confidence: 99%