Robust pruning for efficient CNNs (2020)
DOI: 10.1016/j.patrec.2020.03.034

Cited by 18 publications (4 citation statements)
References 2 publications

“…Especially deep learning-based methods need high-capacity computing resources due to their high number of parameters. To overcome this, studies [22][23][24][25] have been carried out to reduce the number of parameters of deep learning architectures. In this study, the state-of-the-art deep learning models GoogLeNet [26], AlexNet [27], VGG19 [28], ResNet-50, and ResNet-101 [29] were first used to achieve high accuracy in the classification of scenes in aerial images.…”
Section: Introduction
confidence: 99%
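
The parameter-reduction studies this statement cites are only named, not described. As a hedged illustration of the general idea, the sketch below performs generic L1-norm filter pruning on a single convolutional layer in PyTorch; it is not the method of [22][23][24][25], and the names (prune_conv_filters, keep_ratio) are illustrative.

```python
# Minimal sketch of L1-norm filter pruning for one Conv2d layer.
# Generic illustration only, not the cited studies' methods.
import torch
import torch.nn as nn

def prune_conv_filters(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    """Return a new Conv2d that keeps the filters with the largest L1 norms."""
    norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # one L1 norm per output filter
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    keep_idx = torch.argsort(norms, descending=True)[:n_keep]

    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(conv.weight[keep_idx])
        if conv.bias is not None:
            pruned.bias.copy_(conv.bias[keep_idx])
    return pruned

conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
smaller = prune_conv_filters(conv, keep_ratio=0.25)  # 128 -> 32 filters
print(smaller.weight.shape)  # torch.Size([32, 64, 3, 3])
```

In a full network, any layer consuming the pruned output would also need its input channels reduced to match; the sketch omits that bookkeeping.
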
“…A trade-off between accuracy and the number of parameters arises when pursuing efficient CNNs. Several studies have introduced techniques, including pruning, regularization, and efficient architectures, to produce CNNs with a low number of parameters without sacrificing accuracy across diverse topics (Yudistira, Widodo, & Rahayudi, 2020) (Hakim, Kavitha, Yudistira, & Kurita, 2021) (Mitsuno & Kurita, 2021) (Ide, Kobayashi, Watanabe, & Kurita, 2020). In this research, we evaluate three different CNN architectures to achieve the best results: ResNet-50, Inception-V3, and EfficientNet-B7.…”
Section: Introduction
confidence: 99%
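
Of the three routes this statement lists, the regularization route is often realized as a sparsity penalty on BatchNorm scale factors (the "network slimming" idea), which drives whole channels toward zero so they can be pruned later. The sketch below is a generic illustration under that assumption, not the cited authors' specific technique; bn_l1_penalty and l1_strength are illustrative names.

```python
# Hedged sketch: L1 sparsity penalty on BatchNorm scale factors (gamma),
# a common regularization route toward channel-prunable CNNs.
# Assumed illustration, not the cited works' method.
import torch
import torch.nn as nn

def bn_l1_penalty(model: nn.Module, l1_strength: float = 1e-4) -> torch.Tensor:
    """Sum of |gamma| over all BatchNorm2d layers, scaled by l1_strength."""
    penalty = torch.zeros(())
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            penalty = penalty + m.weight.abs().sum()  # m.weight is gamma
    return l1_strength * penalty

# During training, add the penalty to the task loss:
#   loss = criterion(model(x), y) + bn_l1_penalty(model)
# Channels whose gamma is driven near zero become candidates for pruning.
```
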
“…They demonstrated that the feature maps generated at various layers of this model are redundant, and they proposed a singular value decomposition-based framework to reduce the redundant feature maps [25]. Ide et al. proposed a pruning method based on a novel criterion that measures the redundancy of the CNN parameters through the classification loss [26]. Recently, the authors in [27] proposed a kernel-based weight-decorrelation approach to regularize CNNs for better model generalization.…”
Section: Introduction
confidence: 99%
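
The SVD-based framework of [25] is only summarized in the quote above. As a generic, hedged illustration of measuring feature-map redundancy, one can flatten a layer's feature maps and count how many singular values are needed to retain most of the energy; a count well below the channel count suggests redundant maps. Everything here (effective_rank, the 95% energy threshold) is an assumption, not the cited framework.

```python
# Hedged sketch: estimating feature-map redundancy with SVD.
# Illustrative only; not the SVD-based framework of [25].
import torch

def effective_rank(feats: torch.Tensor, energy: float = 0.95) -> int:
    """feats: (C, H, W) feature maps of one layer for one input."""
    c = feats.shape[0]
    mat = feats.reshape(c, -1)                      # (C, H*W): one row per channel
    s = torch.linalg.svdvals(mat)                   # singular values, descending
    cum = torch.cumsum(s ** 2, dim=0) / (s ** 2).sum()
    # Smallest k such that the top-k singular values hold `energy` of the total
    return int(torch.searchsorted(cum, energy).item()) + 1

feats = torch.randn(64, 14, 14)                     # dummy feature maps
print(effective_rank(feats), "of 64 channels capture 95% of the energy")
```
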