2022
DOI: 10.1109/lgrs.2021.3139695
Pairwise Comparison Network for Remote-Sensing Scene Classification

Abstract: Remote sensing scene classification aims to assign a specific semantic label to a remote sensing image. Recently, convolutional neural networks have greatly improved the performance of remote sensing scene classification. However, some confusable images may easily be recognized as an incorrect category, which generally degrades performance. The differences between image pairs can be used to distinguish image categories. This paper proposes a pairwise comparison network, which contains two main steps: pairwis…
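The abstract describes the pairwise-comparison idea only at a high level: differences between a pair of images are used to separate categories. As a rough illustrative sketch (not the paper's actual architecture — the linear "backbone", projection matrices, and the two-way pair head below are all invented placeholders standing in for the CNN described in the paper), a pair of images can be scored from the difference of their feature vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 "images" of 8x8 pixels, and random placeholder weights.
images = rng.normal(size=(4, 8, 8))
W_backbone = rng.normal(size=(64, 16)) * 0.1  # stand-in for a CNN backbone
W_cls = rng.normal(size=(16, 2)) * 0.1        # stand-in for a pair classifier

def extract_features(batch):
    """Flatten each image and project it to a feature vector.
    (The paper uses a convolutional backbone; this linear map is a placeholder.)"""
    flat = batch.reshape(len(batch), -1)
    return flat @ W_backbone

def pairwise_logits(feat_a, feat_b):
    """Score each pair from the element-wise difference of its two feature
    vectors, mirroring the idea that inter-image differences separate classes."""
    diff = np.abs(feat_a - feat_b)
    return diff @ W_cls  # one 2-way score per pair

feats = extract_features(images)               # shape (4, 16)
logits = pairwise_logits(feats[:2], feats[2:]) # shape (2, 2)
print(logits.shape)
```

In a trained version, the backbone and pair head would be learned jointly so that pairs from different categories produce large feature differences.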

Cited by 11 publications (6 citation statements) · References 24 publications
“…The experiment was split into two parts, using 50% or 20% of the samples for training. Table 3 reports overall accuracy (OA, %) on AID at the two training ratios:

Method                      OA (50% train)   OA (20% train)
[40]                        89.64 ± 0.36     86.59 ± 0.29
CRAN [42]                   96.65 ± 0.20     95.24 ± 0.16
MobileNet V2 [43]           95.96 ± 0.27     94.13 ± 0.28
SE-MDPMNet [44]             97.14 ± 0.15     94.68 ± 0.07
Two-Stream Fusion [45]      94.58 ± 0.25     92.32 ± 0.41
ViT [4]                     96.88 ± 0.19     95.58 ± 0.18
CFDNN [46]                  96.56 ± 0.24     94.56 ± 0.24
Inception-v3-CapsNet [18]   96.32 ± 0.12     93.79 ± 0.13
GSSF [47]                   97.65 ± 0.80     95.71 ± 0.22
PCNet [48]                  96.76 ± 0.25     95.53 ± 0.16
GAN [26]                    96.45            …

Table 3 shows that FCIHMRT produced the best results at both the 50% and 20% training ratios.…”
Section: Results Using AID · Citation type: mentioning · Confidence: 99%
“…Meanwhile, it was demonstrated that the OA of palace scenes was reduced to 80%, with some palace scenes categorized into the church, intersection, and island classes, indicating that the classification capacity of FCIHMRT still requires improvement for similar scenes; in general, however, it can distinguish different scenes with rich spatial information. Overall accuracy (OA, %) on NWPU at the two training ratios:

Method                      OA (20% train)   OA (10% train)
[40]                        79.79 ± 0.15     76.47 ± 0.18
CRAN [42]                   94.07 ± 0.08     91.28 ± 0.19
MobileNet V2 [43]           83.26 ± 0.17     80.32 ± 0.16
SE-MDPMNet [44]             94.11 ± 0.03     91.80 ± 0.07
Two-Stream Fusion [45]      83.16 ± 0.18     80.22 ± 0.22
ViT [4]                     94.50 ± 0.18     91.17 ± 0.13
CFDNN [46]                  93.83 ± 0.09     91.17 ± 0.13
Inception-v3-CapsNet [18]   92.6 ± 0.11      89.03 ± 0.21
GSSF [47]                   94.48 ± 0.26     91.98 ± 0.19
PCNet [48]                  94.59 ± 0.07     92.64 ± 0.13
GAN [26]                    93.63 ± 0.12     91.06 ± 0.…

The classification accuracy of the golf course and mobile home park classes reached 99%, which indicates that FCIHMRT performs well on scenes of low feature complexity.…”
Section: Results Using NWPU · Citation type: mentioning · Confidence: 99%
“…Comparison of OA (%) by target dataset:

Method                                 RESISC45   AID
ResNet50+EAN (Zhao et al., 2020)       93.51      93.64
GLDBS (Xu et al., 2021)                94.46      95.45
PCNet (Zhang et al., 2021)             94.59      95.53
Million-AID (Long et al., 2022)        94.26      95.40
Domain-adaptive pre-training (ours)    …”
Section: Methods · Citation type: mentioning · Confidence: 99%
“…In Table 10, the results on RESISC45 and AID obtained using DA pre-training are compared to recent state-of-the-art HRRS scene classification methods. Two of the methods, ResNet50+EAN (Zhao et al., 2020) and PCNet (Zhang et al., 2021), use the same ResNet-50 backbone as in our experiments, while GLDBS (Xu et al., 2021) uses ResNet-34. The best results obtained using pre-training on Million-AID used DenseNet-169 and ResNet-101 for classification of RESISC45 and AID, respectively.…”
Section: Feature Extraction · Citation type: mentioning · Confidence: 99%