2021
DOI: 10.1007/978-3-030-67664-3_17
Topological Insights into Sparse Neural Networks

Abstract: Sparse neural networks are effective approaches to reduce the resource requirements for the deployment of deep neural networks. Recently, the concept of adaptive sparse connectivity has emerged to allow training sparse neural networks from scratch by optimizing the sparse structure during training. However, comparing different sparse topologies and determining how sparse topologies evolve during training, especially when sparse structure optimization is involved, remain challenging…

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
1
1
1
1

Citation Types

1
14
0

Year Published

2021
2021
2022
2022

Publication Types

Select...
4
2
1
1

Relationship

6
2

Authors

Journals

Cited by 17 publications (15 citation statements) · References 10 publications
“…This indicates that in the case of RNNs there exist many low-dimensional sub-networks that can achieve similarly low loss. This phenomenon complements the findings of Liu et al. (2020c), which show that there are numerous sparse sub-networks performing similarly well in the context of sparse MLPs.…”
Section: Analysis of Evolutionary Trajectory of Dynamic Sparse Training (supporting)
confidence: 86%
“…Reference [35] proposed a method to measure the distance between sparse connectivities obtained with dynamic sparse training from the perspective of graph theory. They empirically show that there are many different sparse connectivities that achieve equally good performance.…”
Section: Sparse Neural Network Throughout Training (mentioning)
confidence: 99%
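To make the graph-theoretic comparison concrete, below is a minimal NumPy/SciPy sketch of one way to compare two sparse layer topologies: treat each layer mask as a bipartite graph, match output neurons across the two networks by the similarity of their input-connection sets (hidden neurons carry no fixed identity, so they must be matched before comparing), and average the matched Jaccard distances. The function name layer_topology_distance and the Jaccard-plus-assignment formulation are illustrative assumptions, not the exact distance measure proposed in Reference [35].

```python
# Illustrative sketch (not the cited paper's exact metric): compare two
# sparse layer topologies as bipartite graphs. Output neurons are
# permutation-invariant, so match them by input-connection overlap first,
# then average the matched Jaccard distances.
import numpy as np
from scipy.optimize import linear_sum_assignment

def layer_topology_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """mask_a, mask_b: binary connectivity masks of shape [n_in, n_out]."""
    a, b = mask_a.astype(int), mask_b.astype(int)
    shared = a.T @ b                            # [n_out, n_out] shared input connections
    deg_a = a.sum(axis=0)[:, None]              # in-degree of each output neuron in A
    deg_b = b.sum(axis=0)[None, :]              # in-degree of each output neuron in B
    union = deg_a + deg_b - shared
    cost = 1.0 - shared / np.maximum(union, 1)  # pairwise Jaccard distances
    rows, cols = linear_sum_assignment(cost)    # optimal one-to-one neuron matching
    return float(cost[rows, cols].mean())       # 0 = identical topology, 1 = disjoint

# Example: independently drawn random masks at the same sparsity are far apart.
rng = np.random.default_rng(0)
m1 = (rng.random((784, 100)) < 0.05).astype(int)
m2 = (rng.random((784, 100)) < 0.05).astype(int)
print(layer_topology_distance(m1, m1))  # 0.0
print(layer_topology_distance(m1, m2))  # close to 1 for sparse random masks
```

A distance near 1 between two trained masks that reach the same accuracy is exactly the situation the excerpt describes: many very different sparse connectivities performing equally well.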
“…initialized from a standard normal distribution. The random addition of new connections does not carry a high risk of failing to find good sparse connectivity by the end of training, because it has been shown in (Liu et al. 2020) that sparse training can unveil a vast number of very different sparse connectivity local optima that achieve very similar performance.…”
Section: Training Procedures (mentioning)
confidence: 99%
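The random-growth step this excerpt refers to can be sketched in a few lines of NumPy. This is a hedged illustration of a generic prune-and-regrow update: magnitude-based pruning is a common choice but an assumption here, while the standard-normal initialization of newly grown weights follows the excerpt. The names prune_and_regrow and prune_frac are hypothetical.

```python
# Minimal sketch of one dynamic-sparse-training update, under assumptions:
# magnitude pruning for the drop step (common, but not necessarily the cited
# paper's criterion) and uniformly random growth with standard-normal init
# (as the excerpt states). `prune_and_regrow` / `prune_frac` are hypothetical.
import numpy as np

def prune_and_regrow(weights, mask, prune_frac=0.3, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    active = np.flatnonzero(mask)
    k = int(prune_frac * active.size)
    # Drop: remove the k smallest-magnitude active connections.
    drop = active[np.argsort(np.abs(weights.flat[active]))[:k]]
    mask.flat[drop] = 0
    weights.flat[drop] = 0.0
    # Grow: add k connections uniformly at random among inactive positions,
    # with new weights drawn from a standard normal distribution.
    inactive = np.flatnonzero(mask == 0)
    grow = rng.choice(inactive, size=k, replace=False)
    mask.flat[grow] = 1
    weights.flat[grow] = rng.standard_normal(k)
    return weights, mask
```

Because each grow step samples positions uniformly at random, repeated runs explore very different topologies, which is why the existence of many equally good sparse local optima matters for this training procedure.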