2021
DOI: 10.1109/tpami.2021.3069908
A Survey on Curriculum Learning

Abstract: Curriculum learning (CL) is a training strategy that trains a machine learning model from easier data to harder data, imitating the meaningful learning order in human curricula. As an easy-to-use plug-in tool, the CL strategy has demonstrated its power in improving the generalization capacity and convergence rate of various models in a wide range of scenarios such as computer vision and natural language processing. In this survey article, we comprehensively review CL from various aspects including mo…

Cited by 281 publications (131 citation statements)
References 126 publications (197 reference statements)
“…Meanwhile, several studies have raised similar concerns. Wang et al. (2021b) posed a similar question about "easy-first versus hard-first" in the context of curriculum learning. This paper explores the question from a global perspective and obtains reasonable findings.…”
Section: Existing Weighting Schemes
confidence: 99%
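The "easy-first versus hard-first" question comes down to how per-sample weights are derived from loss values. A minimal sketch of the two opposing schemes; the function names and the temperature parameter are illustrative assumptions, not taken from the cited papers:

```python
import numpy as np

def easy_first_weights(losses, temperature=1.0):
    """Hypothetical easy-first scheme: lower loss -> higher weight
    (self-paced-learning-style weighting)."""
    return np.exp(-np.asarray(losses) / temperature)

def hard_first_weights(losses, temperature=1.0):
    """Hypothetical hard-first scheme: higher loss -> higher weight
    (hard-example-mining-style weighting)."""
    return np.exp(np.asarray(losses) / temperature)

losses = np.array([0.1, 0.5, 2.0])
print(easy_first_weights(losses))  # emphasizes the low-loss (easy) sample
print(hard_first_weights(losses))  # emphasizes the high-loss (hard) sample
```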
“…Curriculum learning (CL) [2, 40] is a training strategy that trains models initially on easy data and then on harder data, imitating how human students are taught with curricula. It can help gradient-based models escape local minima to some extent by following lower-variance gradient directions [42].…”
Section: Curriculum Learning and Hard Sample Mining
confidence: 99%
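The easy-to-hard ordering described above can be realized by sorting samples with a difficulty scorer and gradually widening the training pool. A minimal sketch assuming a PyTorch-style model and dataset; difficulty() is a hypothetical per-sample scorer (e.g., the loss of a pretrained teacher), not anything defined in the cited works:

```python
import torch
from torch.utils.data import DataLoader, Subset

def curriculum_train(model, dataset, difficulty, optimizer, loss_fn,
                     epochs=10, start_frac=0.2):
    # Sort sample indices from easiest to hardest once, up front.
    order = sorted(range(len(dataset)), key=lambda i: difficulty(dataset[i]))
    for epoch in range(epochs):
        # Linearly grow the visible pool from start_frac to the full set.
        frac = min(1.0, start_frac
                   + (1.0 - start_frac) * epoch / max(1, epochs - 1))
        pool = Subset(dataset, order[: max(1, int(frac * len(dataset)))])
        for x, y in DataLoader(pool, batch_size=32, shuffle=True):
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
```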
“…In each stage, we first update the meta-learner in a MAML-like manner, and then conditionally re-sample a new batch of tasks containing the most difficult users (Stage 1) and cities (Stage 2) under a loss-based criterion. Second, we propose to draw on curriculum learning [40] strategies to improve the convergence rate and generalization capacity of the meta-learner under high city-level diversity, which boosts transfer performance when the number of base cities (and hence their diversity) is relatively large. The basic idea is to present training tasks to the meta-learner in an easier-first, harder-later order, with difficulty judged by a teacher, so that the meta-learner converges to a better state.…”
Section: Introduction
confidence: 99%
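The loss-based re-sampling step in this excerpt amounts to ranking candidate tasks by the meta-learner's current loss and keeping the hardest ones. A minimal sketch under assumed interfaces; eval_loss() and the candidate task list are hypothetical stand-ins, not the cited paper's API:

```python
def resample_hardest_tasks(meta_learner, candidate_tasks, eval_loss, k=8):
    """Rank candidate tasks by the meta-learner's current loss and
    return the k most difficult ones for the next meta-update."""
    scored = [(eval_loss(meta_learner, task), task) for task in candidate_tasks]
    scored.sort(key=lambda pair: pair[0], reverse=True)  # hardest first
    return [task for _, task in scored[:k]]
```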
“…It assumes that the cost of labeling mainly depends on the total number of queries; hence the objective is to reduce the number of queries as much as possible [60]. In contrast, deep semi-supervised learning can be regarded as a learning paradigm that combines the advantages of supervised and unsupervised learning by allowing a deep model to be trained on both labeled and unlabeled samples, without human intervention [61, 62]. Figure 1.9 displays the workflow of a semi-supervised deep learning model.…”
Section: Deep Weakly Supervised Learning
confidence: 99%
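One common way to realize the combined labeled-plus-unlabeled training described here is pseudo-labeling: train on labeled data while adding the model's own confident predictions on unlabeled data as extra targets. A minimal sketch assuming a PyTorch classifier; the confidence threshold and loss weighting are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def semi_supervised_step(model, optimizer, x_lab, y_lab, x_unlab,
                         threshold=0.95, unlab_weight=1.0):
    optimizer.zero_grad()
    # Supervised loss on the labeled batch.
    sup_loss = F.cross_entropy(model(x_lab), y_lab)
    # Pseudo-labels: keep only predictions above the confidence threshold.
    with torch.no_grad():
        probs = F.softmax(model(x_unlab), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf >= threshold
    unsup_loss = (F.cross_entropy(model(x_unlab[mask]), pseudo[mask])
                  if mask.any() else torch.zeros((), device=x_lab.device))
    (sup_loss + unlab_weight * unsup_loss).backward()
    optimizer.step()
```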