2022
DOI: 10.3390/rs14246245

Absorption Pruning of Deep Neural Network for Object Detection in Remote Sensing Imagery

Abstract: In recent years, deep convolutional neural networks (DCNNs) have been widely used for object detection tasks in remote sensing images. However, the over-parametrization problem of DCNNs hinders their application in resource-constrained remote sensing devices. In order to solve this problem, we propose a network pruning method (named absorption pruning) to compress the remote sensing object detection network. Unlike the classical iterative three-stage pruning pipeline used in existing methods, absorption pruning…
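The abstract contrasts absorption pruning with the classical iterative three-stage pipeline (train, prune, fine-tune). As background only, the sketch below shows a generic L1-norm structured (filter) pruning step in PyTorch. It illustrates what "pruning a convolutional layer" means in general; it is not the absorption pruning method proposed in the paper, and the helper names and the keep_ratio parameter are assumptions made for this sketch.

```python
# Minimal sketch: generic L1-norm filter ranking for structured channel pruning.
# This illustrates the general idea of pruning a convolutional layer, NOT the
# absorption pruning method proposed in the paper.
import torch
import torch.nn as nn

def rank_filters_by_l1(conv: nn.Conv2d) -> torch.Tensor:
    """Return filter indices sorted from least to most important,
    using the L1 norm of each output filter as a saliency proxy."""
    # conv.weight has shape (out_channels, in_channels, kH, kW)
    saliency = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    return torch.argsort(saliency)  # ascending: prune the front of this list first

def prune_conv(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    """Build a smaller Conv2d that keeps only the highest-saliency filters.
    (A full pipeline must also adjust the next layer's input channels.)"""
    order = rank_filters_by_l1(conv)
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    keep = torch.sort(order[-n_keep:]).values  # indices of filters to keep
    new_conv = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                         stride=conv.stride, padding=conv.padding,
                         bias=conv.bias is not None)
    new_conv.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        new_conv.bias.data = conv.bias.data[keep].clone()
    return new_conv

# Usage: shrink a 64-filter layer to 32 filters.
layer = nn.Conv2d(3, 64, kernel_size=3, padding=1)
print(prune_conv(layer, keep_ratio=0.5))
```

In practice, structured pruning like this is followed by fine-tuning to recover accuracy, which is precisely the iterative loop the paper's absorption pruning aims to avoid.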

Cited by 6 publications (2 citation statements)
References: 44 publications
“…For example, Wang et al [40] proposed an adaptive feature-aware method, which performs brilliantly in real-time object detection tasks. Wang et al [41] proposed an absorption pruning method for the object detection network in remote-sensing images. Tong et al [42] designed a compound semantic feature fusion method to generate an effective semantic description for better pixel-wise object center point interpretation.…”
Section: Label Assignment Strategies
confidence: 99%
“…The authors of [1][2][3] mainly focus on reducing the complexity of the computational backbone or on designing a backbone suitable for edge devices. In addition, several methods have been proposed to address this issue, including network pruning [4,5] and model distillation, which have been proven effective at accelerating inference. These are not novel methods, but they can be used to accelerate any model at the cost of some performance.…”
Section: Introduction
confidence: 99%
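The statement above groups network pruning and model distillation as standard ways to accelerate inference at the cost of some accuracy. For context, a minimal sketch of the widely used temperature-scaled knowledge-distillation loss is given below; it is a generic illustration (the distillation_loss helper and the temperature and alpha values are assumptions for this sketch), not the method of any cited work.

```python
# Minimal sketch: the standard temperature-scaled knowledge-distillation loss
# (soft-target KL term plus hard-label cross-entropy), commonly used to
# transfer a large "teacher" network into a smaller "student".
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.7):
    # Soft targets: KL divergence between temperature-softened distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: usual cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Usage with dummy logits for a batch of 8 samples and 10 classes.
s = torch.randn(8, 10)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y))
```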