2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw50498.2020.00121
Generative Feature Replay For Class-Incremental Learning

Cited by 106 publications (100 citation statements) | References 25 publications
“…As storage is limited and Generative Adversarial Networks (GANs) have matured, generative replay [46] has been proposed as a memory of previous data, and its feasibility has been validated in several works [47]-[51]. Although the quality of the generation model is a bottleneck, many works [47], [50], [52]-[55] have shown that an elaborately designed generation model practically outperforms mainstream lifelong learning methods such as EWC, LwF, MAS [56], PathNet [57], and iCaRL [44]…”
Section: Related Work
confidence: 99%
“…The generator is trained to output instances that follow the same data distribution, so that when a new task arrives the solver trains on the synthetic data produced by the generator along with the new data in order to alleviate catastrophic forgetting. Despite their flexibility and biological appeal, Generative Replay-based methods have three major disadvantages: training a generative model on a stream of changing synthetic and real data is challenging [28], GR models tend to fall short on complex datasets [1], and the time required to train the model on a new task grows linearly because the model has to generate and rehearse the previous t − 1 tasks. Many variants have been proposed to address these issues.…”
Section: Continual Learning
confidence: 99%
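The replay loop described in the statement above can be summarized in a short sketch: the solver for a new task is trained on a mixture of real data and synthetic data produced by a generator fit to earlier tasks, with the frozen previous solver providing pseudo-labels. The names (`generator.sample`, `old_solver`, `n_replay`) are hypothetical; this is an illustration of generative replay in general, not the exact procedure of any cited paper.

```python
import torch

def train_task_with_replay(solver, old_solver, generator, new_loader,
                           n_replay, optimizer, loss_fn):
    """One epoch of generative replay: mix real samples from the new task
    with synthetic samples of earlier tasks, pseudo-labelled by the frozen
    previous solver, and update the current solver on the combined batch."""
    solver.train()
    for real_x, real_y in new_loader:
        with torch.no_grad():
            replay_x = generator.sample(n_replay)           # synthetic "old" data (hypothetical API)
            replay_y = old_solver(replay_x).argmax(dim=1)   # pseudo-labels from the previous solver
        x = torch.cat([real_x, replay_x], dim=0)
        y = torch.cat([real_y, replay_y], dim=0)
        optimizer.zero_grad()
        loss_fn(solver(x), y).backward()
        optimizer.step()
```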
“…Many variants have been proposed to address these issues. Conditional GANs operate in constant time but have lower accuracy [25], while others depend on non-incremental pre-trained networks [28,47].…”
Section: Continual Learning
confidence: 99%
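As an illustration of the constant-time point above, a single class-conditional generator can rehearse every class seen so far in one sampling pass, so replay cost does not grow with the number of past tasks. The `cond_generator(labels)` interface below is a hypothetical sketch, not the specific model of [25].

```python
import torch

def sample_conditional_replay(cond_generator, seen_classes, n_per_class):
    """Draw replay data for all previously seen classes from one
    class-conditional generator in a single batch (hypothetical interface)."""
    labels = torch.tensor(seen_classes).repeat_interleave(n_per_class)
    with torch.no_grad():
        replay_x = cond_generator(labels)  # one forward pass covers every old class
    return replay_x, labels
```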
“…This type of method prevents forgetting by including data from previous tasks, stored either in an episodic memory or via a generative model. There are two main strategies: exemplar rehearsal [4,9,16,27,32] and pseudo-rehearsal [29,31]. The former stores a small amount of training samples (also called exemplars) from previous tasks.…”
Section: Incremental Learning
confidence: 99%
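A minimal sketch of the exemplar-rehearsal strategy mentioned in the last statement: a fixed-budget memory keeps a few samples per class and replays them alongside new-task data. Random selection is used here purely for illustration; methods such as iCaRL instead pick exemplars by herding.

```python
import random

class ExemplarMemory:
    """Fixed-budget episodic memory for exemplar rehearsal (illustrative)."""
    def __init__(self, budget_per_class=20):
        self.budget = budget_per_class
        self.store = {}  # class label -> list of stored exemplars

    def add_class(self, label, samples):
        # Keep at most `budget_per_class` exemplars of the new class.
        k = min(self.budget, len(samples))
        self.store[label] = random.sample(list(samples), k)

    def all_exemplars(self):
        # Flattened (sample, label) pairs, to be mixed with the new task's data.
        return [(x, y) for y, xs in self.store.items() for x in xs]
```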