2013
DOI: 10.1609/aaai.v27i1.8684
Active Task Selection for Lifelong Machine Learning

Abstract: In a lifelong learning framework, an agent acquires knowledge incrementally over consecutive learning tasks, continually building upon its experience. Recent lifelong learning algorithms have achieved nearly identical performance to batch multi-task learning methods while reducing learning time by three orders of magnitude. In this paper, we further improve the scalability of lifelong learning by developing curriculum selection methods that enable an agent to actively select the next task to learn in order to …
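To make the abstract's idea of "actively selecting the next task" concrete, here is a minimal sketch of one plausible selection heuristic in the spirit of the paper: among the not-yet-learned candidate tasks, pick the one whose single-task model the current shared basis reconstructs worst (a diversity-style criterion). All function names are ours, and plain ridge regression stands in for the sparse coding step used in ELLA-style methods; this is an illustration under those assumptions, not the paper's implementation.

```python
import numpy as np

def fit_coefficients(L, theta, lam=0.1):
    """Ridge fit of coefficients s so that L @ s approximates theta.
    (Ridge regression stands in here for a sparse coding step.)"""
    k = L.shape[1]
    return np.linalg.solve(L.T @ L + lam * np.eye(k), L.T @ theta)

def diversity_score(L, theta):
    """Reconstruction error of a task's single-task model under the current
    shared basis L: tasks the basis explains poorly score high."""
    s = fit_coefficients(L, theta)
    return np.linalg.norm(theta - L @ s) ** 2

def select_next_task(L, candidate_models):
    """Pick the unlearned task whose single-task model the current basis
    reconstructs worst (a diversity-style active selection heuristic)."""
    scores = [diversity_score(L, theta) for theta in candidate_models]
    return int(np.argmax(scores))

# Toy usage: a shared basis with k=3 latent components over d=5 parameters,
# and three candidate tasks summarized by their single-task parameter vectors.
rng = np.random.default_rng(0)
L = rng.normal(size=(5, 3))
candidates = [rng.normal(size=5) for _ in range(3)]
print("next task:", select_next_task(L, candidates))
```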

Year Published: 2016–2024

Cited by 41 publications (52 citation statements) · References 11 publications
“…The order in which tasks are encountered does impact learning, and several groups have investigated the effects of ordering (Bengio et al. 2009; Ruvolo and Eaton 2013a; Narvekar et al. 2016). For our experiments, we selected a random task ordering that demonstrates forgetting and held it fixed for all experiments.…”
Section: Methods (mentioning) · Confidence: 99%
“…However, ELLA [32] was introduced as a general lifelong learning algorithm that operates within a multi-task learning framework, allowing the model to learn multiple basic learning models continuously. This led to the development of probability-based [33] and non-parametric Bayesian methods [34], which share information between tasks through linear combinations of basis vectors to enhance model performance. However, these methods have limitations in terms of the types of learning tasks they can handle.…”
Section: Related Work (mentioning) · Confidence: 99%
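For readers unfamiliar with ELLA ("[32]" in the excerpt above), the shared-basis factorization the excerpt alludes to can be illustrated in a few lines: each task keeps only a coefficient vector s_t, and its parameter vector is the linear combination theta_t = L @ s_t over a basis L shared across tasks. The dimensions, variable names, and plain linear predictor below are our assumptions for illustration, not ELLA's actual optimization or update rules.

```python
import numpy as np

# Shared-basis parameterization: the basis L (d x k) is shared across tasks;
# task t is summarized by sparse coefficients s_t, giving theta_t = L @ s_t.
d, k = 5, 3
rng = np.random.default_rng(1)
L = rng.normal(size=(d, k))          # shared latent basis, refined over tasks
s_t = np.array([0.0, 1.2, -0.5])     # task-specific coefficients (sparse)
theta_t = L @ s_t                    # task t's model parameters

def predict(x, theta):
    """Prediction of task t's linear model on input x."""
    return x @ theta

x = rng.normal(size=d)
print(predict(x, theta_t))
```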
“…However, none of these studies consider the problem of meta-learning with a fixed budget. A few studies have looked into actively choosing the next task in a sequence of tasks (Ruvolo and Eaton 2013), (Pentina, Sharmanska, and Lampert 2015), (Pentina and Lampert 2017), (Sun, Cong, and Xu 2018), but they do not look at how to distribute data across tasks.…”
Section: Related Work (mentioning) · Confidence: 99%