2022
DOI: 10.1155/2022/7775419
Differentiable Network Pruning via Polarization of Probabilistic Channelwise Soft Masks

Abstract: Channel pruning has been demonstrated as a highly effective approach to compress large convolutional neural networks. Existing differentiable channel pruning methods usually use deterministic soft masks to scale the channelwise outputs and explore an appropriate threshold on the masks to remove unimportant channels, which sometimes causes unexpected damage to network accuracy when there is no sweet spot that clearly separates important channels from redundant ones. In this article, we introduce a new diff…
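For intuition, here is a minimal sketch (in PyTorch) of the general idea the abstract describes: scaling channelwise outputs with learnable soft masks and adding a polarization-style penalty that pushes mask values toward 0 or 1, so channels separate into keep/prune groups instead of clustering around an ambiguous threshold. The names (SoftMaskedConv, polarization_penalty), the sigmoid parameterization, and the penalty weight are illustrative assumptions; the paper's exact probabilistic mask formulation is not reproduced here.

```python
import torch
import torch.nn as nn

class SoftMaskedConv(nn.Module):
    """Conv layer whose output channels are scaled by learnable soft masks."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        # One learnable logit per output channel; sigmoid keeps the mask in (0, 1).
        self.mask_logits = nn.Parameter(torch.zeros(out_ch))

    def mask(self):
        return torch.sigmoid(self.mask_logits)

    def forward(self, x):
        m = self.mask().view(1, -1, 1, 1)   # broadcast over batch and spatial dims
        return self.conv(x) * m             # scale each channel's output

def polarization_penalty(mask):
    # m * (1 - m) is maximal at 0.5, so minimizing it drives each mask entry
    # toward 0 (prunable channel) or 1 (kept channel).
    return (mask * (1.0 - mask)).sum()

# Toy usage: add the penalty to the task loss, then prune channels whose
# mask settles near zero after training.
layer = SoftMaskedConv(16, 32)
x = torch.randn(4, 16, 8, 8)
out = layer(x)
loss = out.pow(2).mean() + 1e-3 * polarization_penalty(layer.mask())
loss.backward()
keep = layer.mask() > 0.5   # boolean mask of channels to retain
```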

Cited by 2 publications (1 citation statement); References 35 publications (38 reference statements)
“…Common lightweighting methods include model pruning, quantization, and distillation. These methods reduce model size and computation by removing unnecessary parameters, lowering numerical precision, and compressing the model [6].…”
Section: Technical Concepts Related to Neural Network
confidence: 99%
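As a rough illustration of two of the lightweighting techniques named in this statement, the sketch below applies magnitude pruning (removing the smallest weights) and simple 8-bit quantization (lowering numerical precision) to a single PyTorch layer. The 50% pruning ratio and per-tensor quantization scale are illustrative assumptions, not values from the cited work.

```python
import torch
import torch.nn as nn

layer = nn.Linear(128, 64)

# Pruning: zero out the smallest-magnitude weights (here, the bottom 50%).
with torch.no_grad():
    threshold = layer.weight.abs().quantile(0.5)
    layer.weight.mul_((layer.weight.abs() >= threshold).float())

# Quantization: store weights as int8 plus a per-tensor scale factor.
w = layer.weight.detach()
scale = w.abs().max() / 127.0
w_int8 = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
w_dequant = w_int8.float() * scale   # dequantized weights used at inference
```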