2021
DOI: 10.48550/arxiv.2110.05481
Preprint

Which Samples Should be Learned First: Easy or Hard?

Abstract: An effective weighting scheme for training samples is essential for learning tasks. Numerous weighting schemes have been proposed. Some schemes take the easy-first mode on samples, whereas others take the hard-first mode. Naturally, an interesting yet realistic question arises: which samples should be learned first given a new learning task, easy or hard? To answer this question, three aspects of research are carried out. First, a high-level unified weighted loss is proposed, providing a more comprehens…
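
To make the easy-first versus hard-first distinction concrete, below is a minimal, illustrative Python/PyTorch sketch of a difficulty-weighted loss. It is not the paper's proposed unified weighted loss; the weighting functions, the use of the per-sample loss as a difficulty proxy, and the hyperparameter gamma are assumptions chosen for illustration only.

import torch
import torch.nn.functional as F

def difficulty_weighted_ce(logits, targets, mode="easy_first", gamma=2.0):
    """Illustrative per-sample weighted cross-entropy (not the paper's loss).

    logits:  (N, C) raw class scores
    targets: (N,)   integer class labels
    mode:    "easy_first" emphasizes low-loss (easy) samples;
             "hard_first" emphasizes high-loss (hard) samples.
    gamma:   hypothetical hyperparameter controlling the emphasis strength.
    """
    per_sample_loss = F.cross_entropy(logits, targets, reduction="none")
    # p_t is the probability assigned to the true class; high p_t means an easy sample.
    p_t = torch.exp(-per_sample_loss)
    if mode == "hard_first":
        weights = (1.0 - p_t) ** gamma   # focal-loss-style: hard samples dominate
    else:
        weights = p_t ** gamma           # easy samples dominate, curriculum-style
    weights = weights.detach()           # treat weights as constants for the gradient
    return (weights * per_sample_loss).mean()

# Usage with random data
logits = torch.randn(8, 5)
targets = torch.randint(0, 5, (8,))
print(difficulty_weighted_ce(logits, targets, mode="easy_first"))
print(difficulty_weighted_ce(logits, targets, mode="hard_first"))

Under this sketch, switching mode changes only which end of the difficulty spectrum dominates the averaged loss, which is exactly the easy-first/hard-first design choice the abstract asks about.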

Cited by 3 publications (4 citation statements)
References 43 publications

“…Finally, we note that some researchers suggest that better results can be obtained by training with hard samples first [32]. These authors demonstrate this for imbalanced datasets, where they equate "hard" samples with rare examples.…”
Section: Data (supporting)
confidence: 53%
“…A training sample might have distracting backgrounds, corruptions, or a poor foreground/background ratio. For these reasons and more, training samples have been divided into categories of easy/medium/hard [32,33]. This has led to the research question: Which training samples should be learned first and then in what order?…”
Section: Data (mentioning)
confidence: 99%
“…Besides the complexity in the data, there is also the difficulty an ML model has in learning certain things [51], so recognizing something in complex sequences can be challenging (cf. section "Complexity").…”
Section: Difficulty (mentioning)
confidence: 99%
“…Treating each training sample unequally improves the learning performance. Two cues are typically considered in designing the weighting schemes of training samples [1]. The first cue is the application context of learning tasks.…”
Section: Introduction (mentioning)
confidence: 99%