2021
DOI: 10.1016/j.neucom.2021.01.078

SpaceNet: Make Free Space for Continual Learning

Cited by 42 publications (30 citation statements)
References 10 publications

“…Xu and Zhu [50] propose reinforcement learning for selecting the optimal architecture to learn each new task. The work in [45] trains sparse deep neural networks and compresses the sparse connections of each task in the network. The work in [48] expands the network for new tasks while preserving previous task knowledge and incorporating the topology and attributes of network nodes.…”
Section: Related Work (mentioning, confidence: 99%)
“…Then, a few unimportant parameters are used to learn each task. SpaceNet (Sokar et al., 2021c) learns a sparse sub-network for each task from scratch using dynamic sparse training (Mocanu et al., 2018; Hoefler et al., 2021), where the weights and the sparse topology are optimized simultaneously.…”
Section: Related Work (mentioning, confidence: 99%)
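
The dynamic sparse training referenced in the statement above follows a prune-and-grow cycle. The sketch below is a minimal illustration of that generic cycle (in the spirit of Mocanu et al., 2018), not the authors' SpaceNet implementation; the layer shape, sparsity level, drop fraction, and random regrowth rule are all assumptions chosen for illustration.

# A minimal sketch of one prune-and-grow step from dynamic sparse
# training (Mocanu et al., 2018). Illustration of the generic
# technique, not the authors' SpaceNet code: the layer shape,
# density, drop fraction, and random regrowth rule are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def init_sparse_mask(shape, density=0.1):
    # Random binary mask activating a `density` fraction of weights.
    return rng.random(shape) < density

def prune_and_grow(weights, mask, drop_frac=0.3):
    # Drop the smallest-magnitude active weights, then regrow the
    # same number at random inactive positions, so the topology
    # evolves while the overall sparsity level stays fixed.
    active = np.flatnonzero(mask)
    n_drop = int(drop_frac * active.size)
    order = np.argsort(np.abs(weights.flat[active]))
    mask.flat[active[order[:n_drop]]] = False          # prune
    inactive = np.flatnonzero(~mask)
    grown = rng.choice(inactive, size=n_drop, replace=False)
    mask.flat[grown] = True                            # grow
    weights.flat[grown] = 0.0  # new connections start from zero
    return weights, mask

# Usage: alternate ordinary gradient updates on the active weights
# with periodic prune_and_grow calls.
W = rng.normal(scale=0.1, size=(784, 100))
M = init_sparse_mask(W.shape)
W = W * M
W, M = prune_and_grow(W, M)

Alternating gradient updates on the active weights with periodic prune-and-grow steps is what lets the weights and the sparse topology be optimized simultaneously, as the quoted statement describes.
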
“…In this paper, we analyze the recent task-specific components method, SpaceNet (Sokar et al., 2021c). The motivations for choosing SpaceNet are: (1) unlike most task-specific component methods, it does not rely on task identity, making it applicable to class-IL.…”
Section: SpaceNet (mentioning, confidence: 99%)