2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01365
Efficient Feature Transformations for Discriminative and Generative Continual Learning

Cited by 42 publications (15 citation statements) · References 19 publications
“…Singh et al [152] further extended [151] to both zero-shot and non-zero-shot continual learning. Verma et al [153] proposed Efficient Feature Transformations (EFT) to separate shared and task-specific features using efficient convolution operations [154].…”
Section: Dynamic Network Architectures (mentioning)
confidence: 99%
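The quoted description says EFT separates shared from task-specific features using efficient convolution operations. A minimal PyTorch sketch of that idea follows, assuming the common reading that a frozen shared backbone is augmented with a small depthwise-plus-pointwise convolution pair per task; the class name TaskSpecificTransform and this exact layout are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TaskSpecificTransform(nn.Module):
    """Illustrative EFT-style per-task transform (an assumption, not the
    paper's exact design): a depthwise conv captures local task-specific
    structure, a 1x1 pointwise conv mixes channels. Both are cheap."""

    def __init__(self, channels: int, num_tasks: int):
        super().__init__()
        self.depthwise = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1,
                      groups=channels, bias=False)
            for _ in range(num_tasks)
        )
        self.pointwise = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=1, bias=False)
            for _ in range(num_tasks)
        )

    def forward(self, shared_feats: torch.Tensor, task_id: int) -> torch.Tensor:
        # The shared backbone stays frozen; only the small per-task
        # convolutions below are trained when a new task arrives.
        return self.pointwise[task_id](self.depthwise[task_id](shared_feats))

# Usage: adapt frozen backbone features for task 2.
feats = torch.randn(4, 64, 8, 8)            # (batch, channels, H, W)
eft = TaskSpecificTransform(channels=64, num_tasks=3)
task_feats = eft(feats, task_id=2)
```

Each task here adds roughly 9·C + C² weights for C channels, far fewer than duplicating the backbone, which is the efficiency argument the quote alludes to.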
“…They also proposed a less-forget constraint that retains the features of old samples and the classifier parameters by limiting the angle between a sample's features under the old and new models. Verma et al [19] applied a Kullback-Leibler divergence loss to expand the decision boundaries between the features of the given task's data. In that work, inter-class information constrains the parameter updates so that the model grasps multi-granularity correlations among classes and learns old and new classes effectively.…”
Section: Class Incremental Learning Methods (mentioning)
confidence: 99%
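The quote does not spell out how the Kullback-Leibler loss is formed. One hedged reading, sketched below, models each class in the current task as a diagonal Gaussian over its features and maximizes pairwise KL between classes, which pushes their decision boundaries apart; the function names, the Gaussian assumption, and the pairwise form are illustrative, not taken from the cited paper.

```python
import torch

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """Closed-form KL( N(mu_p, var_p) || N(mu_q, var_q) ), diagonal covariances."""
    return 0.5 * torch.sum(
        torch.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0
    )

def boundary_expansion_loss(features, labels, eps=1e-6):
    """Illustrative loss (assumption): maximize pairwise class-to-class KL,
    returned as a negative term so that minimizing it spreads the classes."""
    loss = features.new_zeros(())
    classes = labels.unique()
    for i, ci in enumerate(classes):
        for cj in classes[i + 1:]:
            fi = features[labels == ci]
            fj = features[labels == cj]
            kl = gaussian_kl(fi.mean(0), fi.var(0, unbiased=False) + eps,
                             fj.mean(0), fj.var(0, unbiased=False) + eps)
            loss = loss - kl  # minimizing -KL pushes the two classes apart
    return loss

feats = torch.randn(32, 128)                 # current-task embeddings
labels = torch.randint(0, 4, (32,))
reg = boundary_expansion_loss(feats, labels) # add to the task loss, weighted
```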
“…Xiang et al [17] proposed a learning strategy based on conditional adversarial networks that stores old knowledge as memory-efficient statistics and uses a discriminator to distinguish real from generated examples. EFT [18] proposes a task-specific feature-mapping transformation strategy that significantly reduces the computational cost of generative models. GANMemory [19] proposes sequential style modulation on a well-trained base GAN to build targeted sequential generative models for incremental learning.…”
Section: Related Work (mentioning)
confidence: 99%
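The style modulation attributed to GANMemory can be pictured as cheap per-task scale-and-shift parameters applied to the feature maps of a frozen base generator, similar in spirit to FiLM conditioning. The sketch below is an illustrative reading under that assumption; TaskModulation and its parameter layout are not the authors' implementation.

```python
import torch
import torch.nn as nn

class TaskModulation(nn.Module):
    """Illustrative per-task scale-and-shift (FiLM-style) modulation of a
    frozen generator block: only 2 * channels new parameters per task."""

    def __init__(self, channels: int, num_tasks: int):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_tasks, channels))   # scales
        self.beta = nn.Parameter(torch.zeros(num_tasks, channels))   # shifts

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        g = self.gamma[task_id].view(1, -1, 1, 1)
        b = self.beta[task_id].view(1, -1, 1, 1)
        return g * x + b

# Usage: restyle a frozen generator block's output for task 1.
block_out = torch.randn(2, 256, 16, 16)
mod = TaskModulation(channels=256, num_tasks=4)
styled = mod(block_out, task_id=1)
```

Because the base generator is never updated, samples for earlier tasks stay reproducible by switching back to their stored scale-and-shift parameters, which matches the sequential, targeted behavior the quote describes.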
“…Current incremental learning methods can be broadly classified into three categories, depending on whether the training phase requires real old-class data [9], [10], [11], [12], [13], generated old-class data [14], [15], [16], [17], [18], [19], or no old-class data at all [20], [21], [22], [23], [24], [25], [26].…”
(mentioning)
confidence: 99%