2020
DOI: 10.1007/s10489-020-01894-y

Pruning filters with L1-norm and capped L1-norm for CNN compression

Cited by 125 publications (15 citation statements)
References 25 publications
“…Inference-time pruning: To learn compact structures via training, training-based pruning adds various sparsity regularizations to the training loss, such as the ℓ1-norm based [16,24] importance criteria. In addition, Taylor expansion has been adopted in [22,33] as an importance criterion to minimize the loss change caused by pruning.…”
Section: Related Work
mentioning
confidence: 99%
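As a concrete illustration of the ℓ1-norm criterion this statement refers to, the sketch below scores each convolutional filter by the ℓ1-norm of its weights and keeps the top-scoring fraction. This is a minimal sketch assuming PyTorch; the function names and the keep ratio are illustrative assumptions, not taken from the cited papers.

```python
# Minimal sketch of L1-norm filter scoring (PyTorch assumed);
# score_filters_l1, filters_to_keep, and keep_ratio are illustrative names.
import torch

def score_filters_l1(conv: torch.nn.Conv2d) -> torch.Tensor:
    # conv.weight has shape (out_channels, in_channels, kH, kW);
    # summing |w| over all but the first axis gives one score per filter.
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

def filters_to_keep(conv: torch.nn.Conv2d, keep_ratio: float = 0.5) -> torch.Tensor:
    # Keep the filters with the largest L1-norms; the rest would be pruned.
    scores = score_filters_l1(conv)
    k = max(1, int(keep_ratio * scores.numel()))
    return torch.topk(scores, k).indices
```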
“…After that, we introduce tactics for feature-map selection with filter pruning. Recently, filter pruning has attracted the attention of researchers [13], who calculate the importance of a filter by the capped L1-norm, scale factors, or Shannon entropy. The L1-norm regularization assists two tasks: 1.…”
Section: Methods
mentioning
confidence: 99%
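The capped L1-norm criterion mentioned here can be sketched by clipping each weight's magnitude at a threshold before summing, so that a few very large weights cannot dominate a filter's score. The threshold value and the function name below are assumptions for illustration, not the paper's own code.

```python
# Hedged sketch of capped-L1 filter scoring: sum of min(|w|, eps) per filter.
# eps and score_filters_capped_l1 are illustrative, not from the paper.
import torch

def score_filters_capped_l1(conv: torch.nn.Conv2d, eps: float = 0.1) -> torch.Tensor:
    # Clipping |w| at eps bounds each weight's contribution, making the
    # score robust to outlier weights, unlike the plain L1-norm.
    w = conv.weight.detach().abs()
    return torch.clamp(w, max=eps).sum(dim=(1, 2, 3))
```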
“…With some convolutional layers, pruning only 10% of the information can achieve over a 70% reduction in filters.

              93.25%  93.30%  1.5 × 10^7  5.4 × 10^6  64.0%
Slimming [31] 93.66%  93.80%  –           –           88.5%
Entropy [43]  93.72%  93.97%  1.5 × 10^7  3.5 × 10^5  76.4%
Aketi [19]    93.75%  93.80%  –           –           90.5%
Kumar [13]    93.77%  93.81%…”
Section: VGG16 on CIFAR10 Dataset
mentioning
confidence: 99%
“…To further suppress the impact of outliers or noise, the capped L1-norm has been proposed as a more robust distance metric and has been implemented in many works, including dictionary learning [13], matrix recovery [26], PCA [33], convolutional neural networks [14], etc. Motivated by the successful application of the capped L1-norm, we apply the capped L1-norm to PTSVM and propose a more robust version, the capped L1-norm projection twin support vector machine (CPTSVM).…”
mentioning
confidence: 99%
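For reference, the capped L1-norm that these works build on is typically defined as below; the symbols (r for a residual vector, ε for the cap) are our illustrative notation, not necessarily that of CPTSVM.

```latex
% Capped L1-norm of a vector r with cap parameter \epsilon > 0: each entry
% contributes at most \epsilon, so gross outliers cannot dominate the objective.
\| r \|_{\mathrm{cap},\,\epsilon} = \sum_{i} \min\!\bigl( |r_i|,\ \epsilon \bigr)
```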