2021
DOI: 10.48550/arxiv.2108.01005
Preprint
Sequoia: A Software Framework to Unify Continual Learning Research

Abstract: The field of Continual Learning (CL) seeks to develop algorithms that accumulate knowledge and skills over time through interaction with nonstationary environments and data distributions. Measuring progress in CL can be difficult because a plethora of evaluation procedures (settings) and algorithmic solutions (methods) have emerged, each with their own potentially disjoint set of assumptions about the CL problem. In this work, we view each setting as a set of assumptions. We then create a tree-shaped hierarchy…

Cited by 4 publications (5 citation statements)
References 21 publications
“…Those two settings are also known as respectively class-incremental and instance-incremental [5,6]. The objectives to be optimized may change over time as well [3], as in continual RL [7,8,9].…”
Section: Data Distribution Drifts
confidence: 99%
“…Thus, creating diverse benchmarks, as well as approaches that do not critically rely on the assumptions from the default scenario, should be an ongoing effort. This effort should be pushed notably by existing continual learning libraries such as Continuum [12], Avalanche [13] or Sequoia [7].…”
Section: Sub-task Onsets
confidence: 99%
“…Additionally, Task-Free or Task-Agnostic CL [22,23] represents an additional scenario for when the task labels are not given during either training or testing, which makes it the most challenging scheme. For that, the model does not have any information on task boundaries and still needs to deal with data distribution changes.…”
Section: Scenarios
confidence: 99%
“…Our work is conceptually similar to bsuite [48], which curates a collection of toy, diagnostic experiments to evaluate different capabilities of a standard, non-continual RL agent. Concurrent with our work, Sequoia [46] introduces a software framework with baselines, metrics, and evaluations aimed at unifying research in continual supervised learning and continual reinforcement learning. In contrast to both, we present CORA, a platform that focuses on the continual RL setting and introduces challenging task sequences as benchmarks toward improving different aspects of continual RL agents.…”
Section: Evaluating Continual Learning
confidence: 99%