2020
DOI: 10.1109/access.2020.3031031

Filter Pruning and Re-Initialization via Latent Space Clustering

Abstract: Filter pruning is prevalent for pruning-based model compression. Most filter pruning methods have two main issues: 1) the capability of the pruned network depends on that of the source pretrained model, and 2) they do not consider that filter weights follow a normal distribution. To address these issues, we propose a new pruning method employing both weight re-initialization and latent space clustering. For latent space clustering, we define filters and their feature maps as the vertices and edges of a graph, which is transformed…
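The abstract is truncated here, so the following is only a rough sketch of the general idea of clustering a layer's filters over a similarity graph and keeping one representative per cluster. The graph construction (cosine similarity between flattened filter weights), the use of spectral clustering, and the keep-the-largest-L1-norm rule are all illustrative assumptions, not the authors' published graph-based procedure or their re-initialization step.

```python
# Hedged sketch: generic filter clustering for pruning. All design choices
# below (cosine-similarity affinity, spectral clustering, L1-norm selection)
# are assumptions for illustration, not the paper's exact method.
import numpy as np
import torch.nn as nn
from sklearn.cluster import SpectralClustering

def select_filters_by_clustering(conv: nn.Conv2d, n_keep: int):
    """Return indices of filters to keep: one representative per cluster."""
    w = conv.weight.detach().cpu().numpy()
    flat = w.reshape(w.shape[0], -1)                      # one row per filter
    # cosine-similarity graph between filters (clipped to be a valid affinity)
    norms = np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8
    affinity = np.clip((flat / norms) @ (flat / norms).T, 0.0, 1.0)
    labels = SpectralClustering(n_clusters=n_keep,
                                affinity="precomputed").fit_predict(affinity)
    keep = []
    for c in range(n_keep):
        members = np.where(labels == c)[0]
        # keep the member with the largest L1 norm as the cluster representative
        keep.append(int(members[np.abs(flat[members]).sum(axis=1).argmax()]))
    return sorted(keep)

# usage (illustrative):
# conv = model.features[0]
# keep_idx = select_filters_by_clustering(conv, n_keep=conv.out_channels // 2)
```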

Cited by 7 publications (2 citation statements). References 21 publications.

“…Diversity promoting strategies have been widely used in ensemble learning (Li, Yu, and Zhou 2012; Yu, Li, and Zhou 2011), sampling (Bıyık et al 2019; Derezinski, Calandriello, and Valko 2019; Gartrell et al 2019), energy-based models (Laakom et al 2021b; Zhao, Mathieu, and LeCun 2017), ranking (Gan et al 2020; Yang, Gkatzelis, and Stoyanovich 2019), pruning by reducing redundancy (He et al 2019; Kondo and Yamauchi 2014; Lee et al 2020; Singh et al 2020), and semi-supervised learning (Zbontar et al 2021). In the deep learning context, various approaches have used diversity as a direct regularizer on top of the weight parameters.…”
Section: Related Work
Mentioning confidence: 99%
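As a rough illustration of "diversity as a direct regularizer on top of the weight parameters," the sketch below penalizes pairwise cosine similarity between the filters of a convolutional layer. The specific form of the penalty is an assumption made here for illustration; the cited works each use their own formulation.

```python
# Hedged sketch: a weight-diversity regularizer, here taken to be a penalty on
# pairwise cosine similarity between distinct filters of one conv layer.
# The exact form is an illustrative assumption, not any cited paper's loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

def weight_diversity_penalty(conv: nn.Conv2d) -> torch.Tensor:
    """Sum of squared pairwise cosine similarities between distinct filters."""
    w = conv.weight.reshape(conv.out_channels, -1)   # one row per filter
    w = F.normalize(w, dim=1)                        # unit-norm rows
    sim = w @ w.t()                                  # cosine similarity matrix
    off_diag = sim - torch.eye(sim.size(0), device=sim.device)
    return (off_diag ** 2).sum()

# usage (illustrative): loss = task_loss + beta * weight_diversity_penalty(model.conv1)
```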
“…In the context of supervised neural networks, it has been shown that reducing the correlation improves generalization [20], [21], [22]. Approaches helping to reduce the redundancy have been successfully applied, e.g., for network pruning [23], [24], [25], [26] and self-supervised learning [27]. In this paper, we propose to model the feature redundancy in the bottleneck representation and minimize it explicitly.…”
Section: Introduction
Mentioning confidence: 99%
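As an illustration of explicitly minimizing feature redundancy, the sketch below penalizes the off-diagonal entries of the correlation matrix of the bottleneck features. This shows the general idea only; it is not the exact loss proposed in the citing paper.

```python
# Hedged sketch: one common way to penalize feature redundancy explicitly,
# via off-diagonal entries of the (standardized) feature correlation matrix.
# Illustrative only; not the cited paper's specific formulation.
import torch

def redundancy_penalty(z: torch.Tensor) -> torch.Tensor:
    """z: (batch, dim) bottleneck features; returns the sum of squared
    off-diagonal correlations between feature dimensions."""
    z = (z - z.mean(dim=0)) / (z.std(dim=0) + 1e-6)   # standardize each dim
    corr = (z.T @ z) / z.size(0)                      # (dim, dim) correlations
    off_diag = corr - torch.diag(torch.diag(corr))
    return (off_diag ** 2).sum()

# usage (illustrative): total_loss = task_loss + lambda_red * redundancy_penalty(features)
```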