2018
DOI: 10.1016/j.compeleceng.2018.01.036

Convolutional neural network simplification via feature map pruning

Cited by 41 publications (23 citation statements)
References 8 publications
“…The experimental results show that a network layer can be severely pruned and yet continue to run in the forward direction with coherent and effective activations in a later layer. Our approach also does not re-train after pruning, unlike much previous work in the space [7], [8], [17], [18]. Therefore, this research shows that, while later layers are directly impacted by the complete removal of early features, the removal of up to 50% of these early features does not cause a catastrophic collapse of activation strength in later layers.…”
Section: Discussion
confidence: 76%
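
The behavior described in this statement (pruning early feature maps without re-training, then checking later-layer activations in a forward pass) can be illustrated with a short sketch. This is a minimal sketch assuming PyTorch/torchvision; the VGG-16 model, the layer indices, and the random stand-in input are illustrative choices of mine, not details from the cited papers.

```python
# Minimal sketch, assuming PyTorch/torchvision. The model, layer indices,
# and random input are illustrative, not taken from the cited papers: zero
# out the weakest 50% of an early layer's feature maps (no re-training) and
# compare a later layer's activation strength before and after.
import torch
import torchvision.models as models

model = models.vgg16(weights="IMAGENET1K_V1").eval()
early, late = model.features[2], model.features[28]   # early/late conv layers

def prune_half(module, inputs, output):
    # Rank feature maps by L1 activation magnitude; zero the weakest half.
    l1 = output.abs().sum(dim=(2, 3)).mean(dim=0)     # one score per map
    weakest = torch.argsort(l1)[: output.shape[1] // 2]
    output[:, weakest] = 0.0
    return output

stats = {}
late.register_forward_hook(
    lambda m, i, o: stats.update(late_mean=o.abs().mean().item())
)

x = torch.randn(1, 3, 224, 224)                       # stand-in input image
with torch.no_grad():
    model(x)
    baseline = stats["late_mean"]                     # unpruned forward pass
    handle = early.register_forward_hook(prune_half)
    model(x)
    pruned = stats["late_mean"]                       # pruned forward pass
    handle.remove()

print(f"late-layer mean |activation|: {baseline:.4f} -> {pruned:.4f}")
```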
“…Li et al. use the absolute magnitude of the weights within each feature map as the pruning metric, with low-magnitude maps being removed [17]. An alternative approach is to perform Linear Discriminant Analysis to preserve the class structure in the final layer while reducing the number of feature maps in earlier layers [8]. A recent work, which is currently under review, uses structured sparsity regularization and Stochastic Gradient Descent to prune unnecessary feature maps [18].…”
Section: Related Work
confidence: 99%
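
As a concrete illustration of the weight-magnitude metric attributed to Li et al. [17] in this statement, the following sketch ranks a convolutional layer's output feature maps by the L1 norm of their filter weights and marks the weakest for removal. This is a minimal sketch assuming PyTorch; the layer dimensions and the pruning ratio are arbitrary illustrative values, not parameters from the cited work.

```python
# Minimal sketch, assuming PyTorch. The layer dimensions and pruning ratio
# are arbitrary illustrative values: score each output feature map by the
# L1 norm of its filter weights and mark the lowest-scoring maps for removal.
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3)

# One L1 score per output feature map: sum |w| over (in_ch, kH, kW).
scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))

prune_ratio = 0.5                              # fraction of maps to remove
n_drop = int(prune_ratio * scores.numel())
weakest = torch.argsort(scores)[:n_drop]       # lowest-magnitude maps

keep = torch.ones_like(scores, dtype=torch.bool)
keep[weakest] = False
print(f"keeping {int(keep.sum())} of {scores.numel()} feature maps")
```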
“…This research uses VGG-16 for feature extraction. After visualizing the convolutional layer through its feature maps [38], the feature map of each channel can be obtained, and the channels are fused with equal (1:1) weights to obtain the fused feature map. Figure 6 shows the convolutional layer map after channel fusion.…”
Section: Traditional Shallow Expression Features
confidence: 99%
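
The per-channel visualization and 1:1 fusion described in this statement amounts to an equal-weight average over the channel dimension of a layer's output. A minimal sketch assuming PyTorch/torchvision follows; the choice of VGG-16 sub-block and the random stand-in image are illustrative assumptions, not details from the cited paper.

```python
# Minimal sketch, assuming PyTorch/torchvision. The chosen sub-block and the
# random stand-in image are illustrative: extract per-channel feature maps
# from an early VGG-16 block, then fuse all channels with equal (1:1) weights.
import torch
import torchvision.models as models

vgg = models.vgg16(weights="IMAGENET1K_V1").eval()

x = torch.randn(1, 3, 224, 224)        # stand-in preprocessed input image
with torch.no_grad():
    fmap = vgg.features[:5](x)         # (1, C, H, W): one map per channel

fused = fmap.mean(dim=1)               # 1:1 fusion = equal-weight channel mean
print(tuple(fmap.shape), "->", tuple(fused.shape))
```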
“…The number of convolution kernels in each convolution layer determines the number of channels in the resulting features. After visualizing the convolution layer through its feature maps [12], we can obtain the feature map of each channel and fuse the channels with equal (1:1) weights to obtain the fused feature map, as shown in Figure 2. Because there are many convolution layers in Inception v3, there are also many convolution kernels, and therefore many channels, in each layer.…”
Section: Figure 1 TensorBoard Visualization Node
confidence: 99%