2021
DOI: 10.3390/rs14010161
A Lightweight Convolutional Neural Network Based on Group-Wise Hybrid Attention for Remote Sensing Scene Classification

Abstract: With the development of computer vision, attention mechanisms have been widely studied. Although introducing an attention module into a network model can help to improve classification performance on remote sensing scene images, doing so directly increases the number of model parameters and the amount of computation, resulting in slower model operation. To solve this problem, we carried out the following work. First, a channel attention module and spatial attention module…
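The abstract describes combining a channel attention module and a spatial attention module in a lightweight, group-wise design. As a rough illustration only — the paper's actual modules use learned weights, and the group count and gating functions below are assumptions — a parameter-free sketch of group-wise channel-plus-spatial gating might look like:

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def _channel_attention(x):
    # x: (C, H, W); gate each channel by its global-average-pooled response.
    gate = _sigmoid(x.mean(axis=(1, 2)))      # shape (C,)
    return x * gate[:, None, None]

def _spatial_attention(x):
    # x: (C, H, W); gate each spatial position by the channel-wise mean.
    gate = _sigmoid(x.mean(axis=0))           # shape (H, W)
    return x * gate[None, :, :]

def group_wise_hybrid_attention(x, groups=2):
    # Split channels into groups and apply channel then spatial attention
    # within each group, then concatenate the gated groups back together.
    # This is a parameter-free stand-in, not the paper's learned module.
    chunks = np.split(x, groups, axis=0)
    return np.concatenate(
        [_spatial_attention(_channel_attention(c)) for c in chunks], axis=0
    )

feat = np.random.rand(8, 4, 4).astype(np.float32)
out = group_wise_hybrid_attention(feat, groups=2)
print(out.shape)  # (8, 4, 4)
```

Because both gates lie in (0, 1), the output preserves the feature-map shape while rescaling each channel and position — which is why such modules can be inserted without adding many parameters.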

Cited by 11 publications (10 citation statements); references 52 publications (108 reference statements).
“…As far as the number of parameters is concerned, the parameters of the proposed method are 0.6 M, which is 9.6% and 10% of that of the lightweight methods LCNN-BFF [32] and Skip-Connected CNN [42]. Compared with LCNN-GWHA [55], although the number of parameters is slightly increased, the classification accuracy of the proposed method has obvious advantages under both training ratios.

Method                    Accuracy (ratio 1)  Accuracy (ratio 2)  Parameters
[43]                      90.82 ± 0.16        96.89 ± 0.10        130 M
Fine-tuning [28]          86.59 ± 0.29        89.64 ± 0.36        130 M
Skip-Connected CNN [42]   91.10 ± 0.15        93.30 ± 0.13        6 M
LCNN-BFF [32]             91.66 ± 0.48        94.64 ± 0.16        6.2 M
Gated Bidirectiona [40]   90.…”
Section: Experimental Results on AID Dataset (mentioning)
confidence: 99%
“…First, a gradient-weighted class activation map (Grad-CAM) is utilized to visualize the proposed method. To show that the proposed method can effectively extract the salient features of remote-sensing images, two methods with good classification performance, LCNN-BFF [32] and LCNN-GWHA [55], are chosen in the experiment for visual analysis on the UCM dataset. The visualization results are shown in Figure 14.…”
Section: Visual Analysis (mentioning)
confidence: 99%
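The excerpt above uses Grad-CAM to visualize which image regions drive the classification. The core computation is standard Grad-CAM: weight each feature map by its globally averaged gradient, sum, and apply ReLU. A minimal sketch with synthetic feature maps and gradients (the array shapes here are illustrative assumptions, not values from the paper):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    # feature_maps, gradients: (C, H, W) from the last convolutional layer.
    # Channel importance weights: global-average-pooled gradients.
    weights = gradients.mean(axis=(1, 2))               # shape (C,)
    # Weighted sum of feature maps over the channel axis.
    cam = np.tensordot(weights, feature_maps, axes=1)   # shape (H, W)
    cam = np.maximum(cam, 0)                            # ReLU keeps positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                           # normalize to [0, 1]
    return cam

A = np.random.rand(16, 7, 7)       # stand-in conv feature maps
dYdA = np.random.randn(16, 7, 7)   # stand-in gradients of the class score
heatmap = grad_cam(A, dYdA)
print(heatmap.shape)  # (7, 7)
```

In practice the resulting heatmap is upsampled to the input resolution and overlaid on the image, which is what comparison figures like the one described above display.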
“…Figure 7 examines the comparative study of the FCMBS-RSIC technique against recent methods [23] under the training/testing (80:20) split of the UCM21 dataset. The experimental results revealed that the D-CNN, SC-CNN, and VGG-VD16-SAFF techniques gained ineffective outcomes with the lowest values of precision (prec_n), recall (reca_l), and accuracy (accu_y).…”
Section: Experimental Validation (mentioning)
confidence: 99%