2020
DOI: 10.48550/arxiv.2009.09929
Preprint

CVPR 2020 Continual Learning in Computer Vision Competition: Approaches, Results, Current Challenges and Future Directions

Abstract: In the last few years, we have witnessed a renewed and fast-growing interest in continual learning with deep neural networks with the shared objective of making current AI systems more adaptive, efficient and autonomous. However, despite the significant and undoubted progress of the field in addressing the issue of catastrophic forgetting, benchmarking different continual learning approaches is a difficult task by itself. In fact, given the proliferation of different settings, training and evaluation protocols…

Cited by 3 publications (5 citation statements)
References 18 publications

“…In recent years, an increasing number of continual learning methods have been proposed in various subareas of computer vision, as shown in Figure 1. Additionally, several competitions [9,10] related to continual learning in computer vision have been held in both 2020 and 2021. Hence, in this paper, we present an overview of the recent advances of continual learning in computer vision.…”
Section: Introduction
confidence: 99%
“…Concerning these latter scenarios, memory-based approaches, which preserve samples from previous tasks for replaying, perform better than regularization techniques, which simply address catastrophic forgetting by imposing constraints on the network parameter update at low memory cost [13]- [15]. This finding was confirmed during the recent CL competition at CVPR2020 [16], where the best entry leveraged on rehearsal based strategies.…”
Section: Introduction
confidence: 87%
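The rehearsal (memory-based) strategies referred to in the statement above keep a small memory of examples from earlier tasks and replay them during later training, rather than constraining parameter updates as regularization methods do. Below is a minimal Python sketch of such a replay memory, assuming reservoir sampling to stay within a fixed budget; the names ReplayBuffer, add and sample are illustrative assumptions and do not correspond to any specific competition entry.

```python
import random


class ReplayBuffer:
    """Bounded memory of past (input, label) pairs kept for rehearsal."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []      # retained examples from earlier experiences
        self.num_seen = 0   # total examples observed so far

    def add(self, example):
        """Reservoir sampling: keep a uniform subsample within the budget."""
        self.num_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            idx = random.randrange(self.num_seen)
            if idx < self.capacity:
                self.data[idx] = example

    def sample(self, batch_size):
        """Draw a replay mini-batch to mix into training on the current task."""
        k = min(batch_size, len(self.data))
        return random.sample(self.data, k)
```

The trade-off noted in the quoted passage is visible here: the buffer costs extra memory proportional to its capacity, but replaying its contents directly counteracts forgetting instead of only restricting how far the weights may drift.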
“…Among these groups, rehearsal CL strategies have emerged as the most effective to deal with catastrophic forgetting, at the cost of an additional replay memory [1], [27], [28]. In the recent CL challenge at CVPR2020 on the Core50 image dataset, ∼90% of the competitors used rehearsal strategies [16]. The best entry of the more challenging New Instances and Classes track (the same scenario considered in our work) [17], which is evaluated in terms of test accuracy but also memory and computation requirements, scores 91% by replaying image data.…”
Section: A Memory-efficient Continual Learning
confidence: 99%
“…The size of the mini-batch retrieved from the memory buffer is also set to 10, irrespective of the size of the memory buffer as in [34]. Note that with techniques such as transfer learning (e.g., using a pre-trained model from ImageNet), data augmentation and deeper network architectures, it is possible to achieve much higher performance in this setting [109]. However, since those techniques are orthogonal to our investigation and deviate from the simpler experimental settings of other papers we cite and compare, we do not use them in our experiments.…”
Section: Domain Incremental Datasets
confidence: 99%
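The setup described above, where a replay mini-batch of fixed size (here 10 examples) is retrieved from the memory buffer on every update regardless of the buffer's current size, can be sketched as a single training step. This is a hedged illustration reusing the hypothetical ReplayBuffer from the earlier sketch; model, loss_fn and optimizer are placeholder PyTorch objects, not the configuration of the cited paper.

```python
import torch


def training_step(model, loss_fn, optimizer, buffer, batch_x, batch_y,
                  replay_batch_size=10):
    """One update on current-task data plus a fixed-size replayed mini-batch."""
    # Retrieve a mini-batch of fixed size from memory, irrespective of how
    # many examples the buffer currently holds.
    replay = buffer.sample(replay_batch_size)

    inputs, targets = batch_x, batch_y
    if replay:
        rx = torch.stack([x for x, _ in replay])
        ry = torch.stack([y for _, y in replay])
        inputs = torch.cat([batch_x, rx])
        targets = torch.cat([batch_y, ry])

    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

    # Current examples enter the buffer after the update; reservoir sampling
    # inside the buffer decides which of them are actually retained.
    for x, y in zip(batch_x, batch_y):
        buffer.add((x, y))
    return loss.item()
```

Keeping the replay batch size fixed decouples the per-step compute cost from the memory budget, which is one reason this kind of comparison is made independently of techniques such as pretraining or data augmentation mentioned in the quoted passage.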