2022
DOI: 10.1016/j.softx.2021.100907
Simplify: A Python library for optimizing pruned neural networks

Cited by 11 publications (6 citation statements)
References 4 publications
“…From this, we observe that reducing the complexity of the backbone would result in the overall reduction of the complexity for the entire model N, and towards this end pruning has already proved to be an effective approach [9,10]. Pruning approaches can be divided into two groups.…”
Section: Effect of Pruned Backbones to Capsule Layers
confidence: 91%
“…Bragagnolo et al. [20] showed that structured sparsity, despite removing significantly fewer parameters from the model, yields a lower memory footprint and inference time. When pruning a network in a structured way, a simplification step that practically reduces the rank of the matrices is possible; on the other hand, encoding unstructured sparse matrices leads to representation overheads [10].…”
Section: Effect of Pruned Backbones to Capsule Layers
confidence: 99%
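The distinction drawn in this statement can be made concrete with a small hand-rolled sketch (plain PyTorch, not the Simplify library itself; layer sizes and pruned indices are illustrative): once whole output neurons are zeroed, their rows and the matching input columns of the next layer can be physically dropped, leaving smaller dense matrices that compute the same function.

```python
import torch
import torch.nn as nn

# Hand-rolled illustration of why structured sparsity allows smaller matrices.
torch.manual_seed(0)
fc1, fc2 = nn.Linear(8, 4), nn.Linear(4, 2)

# Structured pruning: zero entire output neurons (rows) of fc1.
pruned, keep = [1, 3], [0, 2]
with torch.no_grad():
    fc1.weight[pruned] = 0.0
    fc1.bias[pruned] = 0.0

# Simplification: drop the zeroed rows of fc1 and the matching input
# columns of fc2, producing smaller dense matrices.
fc1_s, fc2_s = nn.Linear(8, 2), nn.Linear(2, 2)
with torch.no_grad():
    fc1_s.weight.copy_(fc1.weight[keep])
    fc1_s.bias.copy_(fc1.bias[keep])
    fc2_s.weight.copy_(fc2.weight[:, keep])
    fc2_s.bias.copy_(fc2.bias)

x = torch.randn(1, 8)
# Same function, fewer stored weights.
print(torch.allclose(fc2(fc1(x)), fc2_s(fc1_s(x))))  # True
```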
“…It is worth pointing out that, in order to exploit the structured sparsity introduced in the network, the zeroed neurons have to be removed from the architecture. For this operation we used the Simplify library [20]. Quantization: alongside pruning, quantization is one of the most widely adopted compression methods.…”
Section: Methods
confidence: 99%
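A minimal sketch of the removal step described in this statement. The `simplify(model, dummy_input)` call mirrors the usage pattern advertised by the Simplify library, but its exact signature should be checked against the installed version; the ResNet-18 backbone and the 50% pruning ratio are illustrative assumptions, not details taken from the cited work.

```python
import torch
import torch.nn.utils.prune as prune
from torchvision.models import resnet18
from simplify import simplify  # assumed entry point; check the library's README

model = resnet18(weights=None)

# Zero half of the output channels of every convolution (structured pruning).
for m in model.modules():
    if isinstance(m, torch.nn.Conv2d):
        prune.ln_structured(m, name="weight", amount=0.5, n=2, dim=0)
        prune.remove(m, "weight")  # bake the zeros into the weight tensor

# Physically remove the zeroed neurons from the architecture; the dummy input
# is assumed to let the library propagate the surviving shapes through the
# network (batch-norm folding may also be required per the library docs).
simplify(model, torch.zeros(1, 3, 224, 224))
print(sum(p.numel() for p in model.parameters()), "parameters after simplification")
```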
“…We use the PyTorch method [17] to set to zero the weights with the smallest L2 norm, and the Simplify library [18] to remove them from the DNN.…”
confidence: 99%
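The zeroing step this statement refers to can be reproduced with PyTorch's built-in structured pruning utilities; below is a short sketch on a single convolution (the layer shape and the 25% ratio are illustrative), with the architectural removal left to a simplification pass such as the one sketched above.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Conv2d(16, 32, kernel_size=3)

# Zero the 25% of output channels whose weights have the smallest L2 norm
# (n=2 selects the L2 criterion, dim=0 prunes whole output channels).
prune.ln_structured(layer, name="weight", amount=0.25, n=2, dim=0)
prune.remove(layer, "weight")  # make the mask permanent

# Whole channels are now exactly zero; a tool such as Simplify can then
# drop them from the architecture to shrink the tensors.
channel_norms = layer.weight.detach().flatten(1).norm(p=2, dim=1)
print((channel_norms == 0).sum().item(), "of", layer.out_channels, "channels zeroed")
```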