2019
DOI: 10.1007/s11063-019-10076-y
Methodologies of Compressing a Stable Performance Convolutional Neural Networks in Image Classification

Cited by 19 publications (8 citation statements) | References 17 publications
“…Works in [1,7,20,26,28,43,50] concentrate on structural pruning, which prunes entire channels or filters by means of different techniques. Non-structural pruning can be carried out on pretrained models without retraining, as in [37,2], which increases efficiency, or with retraining, as in lottery ticket search [15], movement pruning [41,18], and variational dropout [33]. Other efficient pruning methods include genetic approaches, as in [45], which adopts constant masks and defines its own crossover and mutation operators, and reinforcement-learning techniques, as in [17].…”
Section: Related Work
confidence: 99%
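The distinction drawn above between structural and non-structural pruning can be illustrated with a minimal sketch of non-structural (unstructured) magnitude pruning applied without retraining: individual weights below a magnitude threshold are masked to zero. The layer shape, sparsity level, and function name here are illustrative assumptions, not taken from any of the cited methods.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 128))  # a dense layer's weight matrix (toy shape)

def magnitude_prune(w, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest magnitude."""
    threshold = np.quantile(np.abs(w), sparsity)
    mask = np.abs(w) >= threshold       # keep only the largest-magnitude weights
    return w * mask, mask

pruned, mask = magnitude_prune(weights, sparsity=0.9)
print(f"kept {mask.mean():.0%} of weights")  # roughly 10% survive
```

Structural pruning would instead zero whole rows or columns of `weights` (entire channels), which maps more directly onto dense-hardware speedups.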
“…The recent breakthrough in DCNNs has led to improvements in the accuracy of both vision and auditory systems, such as image recognition, object detection and speech recognition [2][3][4][5][6][7]. DCNNs are generally composed of several cascades of basic layers.…”
Section: Algorithm
confidence: 99%
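The "cascade of basic layers" mentioned above can be sketched as a sequence of convolution, non-linearity, and pooling applied in order. This is a minimal single-channel toy forward pass; the input size, 3×3 averaging kernel, and pooling window are assumptions made for illustration.

```python
import numpy as np

def conv2d(x, k):
    """Naive valid 2-D convolution (cross-correlation) of x with kernel k."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(0.0, x)           # element-wise non-linearity

def max_pool(x, size=2):
    """Non-overlapping max pooling over size x size windows."""
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

x = np.arange(36, dtype=float).reshape(6, 6)  # toy single-channel input
k = np.ones((3, 3)) / 9.0                     # 3x3 averaging kernel
feat = max_pool(relu(conv2d(x, k)))           # one conv -> ReLU -> pool cascade
print(feat.shape)                             # (2, 2)
```

A real DCNN stacks many such cascades with learned multi-channel kernels, but the data flow is the same composition of simple layers.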
“…The non-linear neuron is called an activation function, which gives the network the ability to solve non-linear problems. The Rectified Linear Unit (ReLU) is the most common one, as shown in equation (2). Paper [26] demonstrates that training converges much faster in networks with ReLU neurons than in networks with other neurons.…”
Section: Algorithm
confidence: 99%
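The ReLU activation described above is simply f(x) = max(0, x): negative pre-activations are clamped to zero while positive ones pass through unchanged, which avoids the gradient saturation of sigmoid-style neurons. A minimal sketch:

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: clamp negative values to zero."""
    return np.maximum(0.0, x)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(x))  # negatives become 0.0; 1.5 and 3.0 pass through unchanged
```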
“…In recent years, with the advancement of machine learning, particularly deep learning, robot obstacle avoidance through self-learning has become a research hotspot (Z He, et al, 2021) [2]. An end-to-end learning strategy, a form of deep learning, learns a mapping from inputs to outputs directly through a deep network.…”
Section: Introduction
confidence: 99%