2018
DOI: 10.48550/arxiv.1810.13166
Preprint

Don't forget, there is more than forgetting: new metrics for Continual Learning

Abstract: Continual learning consists of algorithms that learn from a stream of data/tasks continuously and adaptively through time, enabling the incremental development of ever more complex knowledge and skills. The lack of consensus in evaluating continual learning algorithms and the almost exclusive focus on forgetting motivate us to propose a more comprehensive set of implementation-independent metrics accounting for several factors we believe have practical implications worth considering in the deployment of real A…

Cited by 23 publications (31 citation statements), spanning 2019–2024 · References 9 publications

Citation statements, ordered by relevance:
“…We also consider training time as a significant evaluation factor [11], especially in real-time applications. Besides, algorithm ranking metrics are proposed according to different desiderata in [4], including accuracy, forward/backward transfer, model size efficiency, sample storage size efficiency, and computational efficiency. Díaz-Rodríguez et al. fuse these desiderata into a single CL score for ranking purposes.…”
Section: Research Questions
confidence: 99%
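As a concrete illustration of the fused score this excerpt describes, here is a minimal Python sketch of combining per-criterion values into one ranking number by a weighted average. The criterion names, the normalization to [0, 1], and the uniform default weights are illustrative assumptions, not the exact formulation of [4].

```python
# Hedged sketch: fusing several continual-learning desiderata into a single
# score via a weighted average, in the spirit of Diaz-Rodriguez et al. [4].
# Criterion names, normalization, and weights are assumptions for illustration.

def cl_score(criteria: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-criterion values, each normalized to [0, 1].

    criteria: e.g. accuracy, forward/backward transfer, model-size
              efficiency, sample-storage efficiency, compute efficiency,
              each already mapped into [0, 1] (1 = best).
    weights:  non-negative importance weights; normalized to sum to 1 here.
    """
    total = sum(weights.values())
    return sum(weights[k] * criteria[k] for k in criteria) / total

# Example usage (all values hypothetical):
criteria = {"accuracy": 0.72, "bwt": 0.60, "fwt": 0.55,
            "model_size": 0.90, "sample_storage": 1.00, "compute": 0.40}
weights = {k: 1.0 for k in criteria}  # uniform weights as a default
print(f"CL score: {cl_score(criteria, weights):.3f}")
```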
“…Several studies suggested that forward knowledge transfer is critical for continual learning [17,4], which might be either positive or negative due to the dynamic data distributions. Although it is highly nontrivial to mitigate potential negative transfer while overcoming catastrophic forgetting, the efforts that specifically consider this challenging issue are limited.…”
Section: Related Work
confidence: 99%
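The positive or negative transfer this excerpt refers to is commonly quantified from a task-accuracy matrix R, where R[i, j] is the test accuracy on task j after sequentially training through task i. Below is a minimal sketch of backward and forward transfer in the GEM style that [4] builds on; the sign conventions and the random-initialization baseline b are assumptions about the cited metrics, not a verbatim reproduction of them.

```python
import numpy as np

# Hedged sketch: backward/forward transfer from an accuracy matrix R.
# R[i, j] = test accuracy on task j after sequentially training on tasks 0..i.

def backward_transfer(R: np.ndarray) -> float:
    """Average change on earlier tasks after learning all T tasks.

    Negative values indicate catastrophic forgetting; positive values
    indicate positive backward transfer.
    """
    T = R.shape[0]
    return float(np.mean([R[T - 1, i] - R[i, i] for i in range(T - 1)]))

def forward_transfer(R: np.ndarray, b: np.ndarray) -> float:
    """Average accuracy on each task *before* training on it, relative to a
    baseline b[i] (e.g. accuracy of a randomly initialized model)."""
    T = R.shape[0]
    return float(np.mean([R[i - 1, i] - b[i] for i in range(1, T)]))

# Example with T = 3 tasks (values hypothetical):
R = np.array([[0.90, 0.20, 0.15],
              [0.70, 0.88, 0.30],
              [0.65, 0.75, 0.85]])
b = np.array([0.10, 0.10, 0.10])
print(backward_transfer(R))   # negative here: forgetting on earlier tasks
print(forward_transfer(R, b)) # positive here: pre-training helps new tasks
```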
“…(1) CIFAR-100-SC [34]: CIFAR-100 can be split into 20 superclasses (SC) with 5 classes per superclass, grouped by semantic similarity, where each superclass is a classification task. Since the superclasses are semantically different, forward knowledge transfer in such a task sequence is …

Architecture: We follow [10] in using a CNN architecture with 6 convolution layers and 2 fully connected layers for benchmarks (1, 2, 3), and AlexNet [14] for benchmarks (4, 5). Since continual learning needs to quickly learn a usable model from incrementally collected data, we mainly consider learning the network from scratch.…”
Section: Visual Classification Tasks
confidence: 99%
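For concreteness, here is a minimal PyTorch sketch of the kind of "6 conv + 2 FC" network the excerpt describes, sized for 32x32 CIFAR-style inputs with a 5-way head matching one CIFAR-100 superclass task. The channel widths, kernel sizes, pooling placement, and hidden size are assumptions rather than the exact configuration of [10].

```python
import torch
import torch.nn as nn

# Hedged sketch of a "6 conv + 2 FC" CNN for 32x32 inputs, as described in
# the excerpt. Widths, kernels, and pooling are assumptions; [10] may differ.
class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 5):  # 5 classes per superclass task
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(c_out, c_out, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),  # halve spatial resolution
            )
        # Three blocks of two conv layers each = 6 conv layers total.
        self.features = nn.Sequential(block(3, 32), block(32, 64), block(64, 128))
        # After three 2x poolings: 32 -> 4, so the flattened size is 128 * 4 * 4.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: one superclass task from CIFAR-100-SC has 5 classes.
model = SmallCNN(num_classes=5)
print(model(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 5])
```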
“…However, the broad range of continual learning methods remains largely unexplored for knowledge graph embedding. Furthermore, the implications of these assumptions in the context of robotics are not well documented, owing to the definition of different task-specific measures and a focus on final inference performance [15].…”
Section: Introduction
confidence: 99%