2022
DOI: 10.1007/978-3-031-19809-0_13
Balancing Stability and Plasticity Through Advanced Null Space in Continual Learning

Cited by 11 publications (3 citation statements) · References 30 publications
“…Advanced Null Space (AdNS) is the most similar work to ours, which addresses the overfitting problem by a similar regularization construction in a multiclass continual learning scenario [23]. However, their approach differs in that there is no general rule for setting the regularization strength, and each task has a separate non-orthogonalized multiclass classifier, possibly degrading the performance on the base classes.…”
Section: Related Work
Citation type: mentioning
confidence: 99%
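The null-space regularization idea this citing work compares itself against can be made concrete with a small sketch. The following is a minimal illustration, not the authors' implementation: the function names, the SVD-based null-space construction, and the `eps` threshold are assumptions. It shows how a new-task gradient can be projected into the approximate null space of stored previous-task features so that responses on old tasks are (approximately) left unchanged.

```python
# Minimal sketch (assumed construction, not the AdNS code): project a
# parameter update into the approximate null space of previous-task features.
import numpy as np

def null_space_projector(old_features, eps=1e-3):
    # Rows of `old_features` are last-layer activations from previous tasks.
    # Right-singular vectors with (near-)zero singular values span the
    # approximate null space of those activations.
    _, s, vt = np.linalg.svd(old_features, full_matrices=True)
    rank = int(np.sum(s > eps * s.max())) if s.size else 0
    null_basis = vt[rank:].T            # (d, d - rank): null-space basis columns
    return null_basis @ null_basis.T    # projector P = U U^T

def project_update(grad, projector):
    # Restrict the new-task update to directions that (approximately) do not
    # change the network's outputs on old-task inputs.
    return projector @ grad

# Toy usage: 3 stored old-task features of dimension 8 leave a 5-dim null space.
old_feats = np.random.randn(3, 8)
P = null_space_projector(old_feats)
g = np.random.randn(8)
g_projected = project_update(g, P)
print(np.allclose(old_feats @ g_projected, 0.0, atol=1e-8))  # ~zero interference
```

How strongly such a projection (or an equivalent regularizer) is applied is exactly the "regularization strength" the citing authors point out has no general setting rule.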
“…The novel class is then added using the imprinting technique [30]. This technique computes the features of the available novel training examples for the last layer ϕ_L(·) and uses the arithmetic average of these features as the new weight vector. [Per-method results table comparing [13], AdNS [23], ProtoNet [2], iCaRL [8], and a Finetune baseline omitted.]…”
Section: Null-space Initialization of the Output Layer
Citation type: mentioning
confidence: 99%
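The imprinting step quoted above has a straightforward realization. Below is a minimal sketch, not the cited paper's code: the function name, the L2 normalization, and the toy shapes are assumptions. It adds a novel class by setting its output-layer weight to the arithmetic average of the last-layer features of the few available novel examples.

```python
# Minimal sketch (assumed details): weight imprinting for one novel class.
import numpy as np

def imprint_new_class(weights, novel_features):
    """Append one row to the output-layer weight matrix.

    weights        : (num_classes, d) existing output-layer weights
    novel_features : (n_novel, d) last-layer features phi_L(x) of the
                     novel-class training examples
    """
    w_new = novel_features.mean(axis=0)        # arithmetic average of features
    w_new /= np.linalg.norm(w_new) + 1e-12     # common choice: L2-normalize (assumption)
    return np.vstack([weights, w_new])

# Toy usage: 10 base classes, feature dimension 64, 5 novel examples.
W = np.random.randn(10, 64)
feats = np.random.randn(5, 64)
W_ext = imprint_new_class(W, feats)            # shape (11, 64)
```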
“…Continual Learning (CL) studies the problem of learning new knowledge incrementally while preserving previously learned knowledge. Following [10], CL methods can be divided into three groups: rehearsal-based methods [30,44,36,3,37,11], regularization-based methods [18,20,22,19,42], and parameter isolation methods [31,32,28]. However, the majority of existing CL methods focus on supervised settings, while Continual Self-Supervised Learning (CSSL) is surprisingly under-investigated.…”
Section: Continual Learning
Citation type: mentioning
confidence: 99%