2021
DOI: 10.1007/s11063-021-10463-4
Compact Deep Color Features for Remote Sensing Scene Classification

Abstract: Aerial scene classification is a challenging problem in understanding high-resolution remote sensing images. Most recent aerial scene classification approaches are based on Convolutional Neural Networks (CNNs). These CNN models are trained on a large amount of labeled data, and the de facto practice is to use RGB patches as input to the networks. However, the importance of color within the deep learning framework is yet to be investigated for aerial scene classification. In this work, we investigate the fusion …
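As a rough illustration of the kind of color-feature fusion the abstract alludes to, the sketch below extracts deep features from two color representations of the same patch and concatenates them into one descriptor. The backbone (ResNet-18), the second color representation (grayscale), and fusion by concatenation are all illustrative assumptions, not the paper's confirmed pipeline.

```python
# Hypothetical two-stream color feature fusion; all choices here are assumptions.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()        # expose the 512-d pooled features
backbone.eval()

def deep_features(x: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        return backbone(x)               # (N, 512)

rgb = torch.rand(1, 3, 224, 224)         # placeholder aerial patch
gray3 = TF.rgb_to_grayscale(rgb, num_output_channels=3)  # a second color representation

# Concatenate the two streams into a single joint descriptor.
fused = torch.cat([deep_features(rgb), deep_features(gray3)], dim=1)  # (1, 1024)
```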

Cited by 15 publications (10 citation statements)
References 70 publications

“…Also, when more stone types are included in the dataset, the proposed models might benefit from online or active learning techniques for adapting to new settings (for instance, kidney stones from people in countries with very different weather, an aspect that has not been studied so far). Furthermore, training deep learning models on images in other color spaces (such as the … or … color spaces) is another promising area of research, as the obtained results can be more robust and smaller deep-learning networks could be deployed, speeding up inference time (Anwer et al., 2021).…”
Section: Discussion (mentioning)
confidence: 99%
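The excerpt above points to training on alternative color spaces as future work. As a sketch of what that changes in practice, the snippet below converts an image to HSV before it enters the network; the HSV choice and all names are assumptions, since the excerpt's actual color-space names did not survive extraction.

```python
# Hypothetical input-pipeline tweak: train on an alternative color space
# instead of raw RGB. HSV is an illustrative choice only.
import cv2
import numpy as np

def to_hsv_input(path: str) -> np.ndarray:
    bgr = cv2.imread(path)                      # OpenCV loads images as BGR
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)  # H in [0,179], S and V in [0,255]
    hsv = hsv.astype(np.float32)
    hsv[..., 0] /= 179.0                        # normalize hue to [0, 1]
    hsv[..., 1:] /= 255.0                       # normalize saturation and value
    return hsv                                  # feed this to the CNN instead of RGB
```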
“…We show that the same pixel stride (4), corresponding to the pixel width of the feature window (4×4), is better suited to the domain of remote sensing scene image classification. In this way, it allows the classifiers to consider more scales with minimal increase in overlapping or redundancy.

Method | Accuracy (Mean±std) | Accuracy (Mean±std)
… | – | 92.33±0.20
MDFR [56] | 83.37±0.26 | 86.89±0.17
APDC-Net [57] | 85.94±0.22 | 87.84±0.26
BoWK [22] | – | 66.87±0.90
SFCNN [58] | 89.89±0.16 | 92.55±0.14
Attention GANs [59] | 86.11±0.22 | 89.44±0.18
CNN + GCN [15] | 90.75±0.21 | 92.87±0.13
Color fusion [60] | – | 87.50±0.00
Graph CNN [61] | 91.39±0.19 | 93.62±0.28
AlexNet+SAFF [62] | 80.05±0.29 | 84.00±0.17
VGG-VD16+SAFF [62] | 84.38±0.19 | 87.86±0.14
IDCCP [63] | 91.55±0.16 | 93.76±0.12
SEMSDNet [64] | 91.68±0.39 | 93.89±0.…”
Section: Performance Comparison of Different Pixel Strides (mentioning)
confidence: 99%
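A minimal sketch of the stride choice the excerpt argues for, assuming a PyTorch-style pipeline (the tensor shapes and the use of torch.nn.functional.unfold are illustrative, not the authors' code): with a 4×4 window moved at a pixel stride of 4, adjacent windows share no pixels.

```python
# Non-overlapping 4x4 feature windows via kernel_size=4, stride=4.
import torch
import torch.nn.functional as F

feature_map = torch.randn(1, 3, 64, 64)          # (batch, channels, H, W)

# kernel_size equals stride: adjacent windows share no pixels, so more scales
# can be covered with minimal redundancy, as the excerpt argues.
patches = F.unfold(feature_map, kernel_size=4, stride=4)
print(patches.shape)  # torch.Size([1, 48, 256]): 3*4*4 values per patch, 16*16 positions
```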
“…11 (c) that all the classes are well separable, which could potentially lead to better performance when training BiLSTM on a remote sensing dataset.

Method | Accuracy (Mean±std) | Accuracy (Mean±std)
DCA fusion [20] | – | 91.87±0.36
D-CNN [54] | 90.82±0.16 | 96.89±0.10
MDFR [56] | 90.62±0.27 | 93.37±0.29
APDC-Net [57] | 88.56±0.29 | 92.15±0.29
SFCNN [58] | 94.93±0.31 | 96.89±0.10
Attention GANs [59] | 93.97±0.23 | 96.03±0.16
CNN + GCN [15] | 94.93±0.31 | 96.89±0.10
Color fusion [60] | – | 94.00±0.00
AlexNet+SAFF [62] | 87.51±0.36 | 91.83±0.27
VGG-VD16+SAFF [62] | … | …

Method | Accuracy (Mean±std)
AlexNet+sum pooling [65] | 94.10±0.93
VGG-VD16+sum pooling [65] | 91.67±1.40
SPP-Net [66] | 96.67±0.94
GoogleNet [2] | 94.31±0.89
VGG-VD16 [2] | 95.21±1.20
DCA fusion [20] | 96.90±0.77
MCNN [67] | 96.66±0.90
D-CNN [54] | 98.93±0.10
Triple networks [55] | 97.99±0.53
VGG-VD16+AlexNet [21] | 98.81±0.38
Fusion by concatenation [68] | 98.10±0.20
MDFR [56] | 98.02±0.51
APDC-Net [57] | 97.05±0.43
BoWK [22] | 97.52±0.80
Attention GANs [59] | 97.69±0.69
AlexNet+SAFF [62] | 96.13±0.97
VGG-VD16+SAFF [62] | 97.02±0.78
Color fusion [60] | 98.10±0.00
Graph CNN [61] | 99.00±0.43
IDCCP [63] | 99.05±0.20
SEMSDNet [64] | 99.41±0.14
PBDL+SVM (ours) | 98.11±0.54
PBDL (the proposed) | 99.57±0.36…”
Section: Visualization of Feature Structures (mentioning)
confidence: 99%
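The separability claim above refers to a 2-D embedding of deep features (the excerpt's Fig. 11(c)). A generic way to produce such a plot, sketched here with t-SNE from scikit-learn on placeholder features (the feature source, dimensions, and class count are assumptions):

```python
# Project deep features to 2-D with t-SNE and color by class; well-separated
# clusters suggest the classes are separable, as the excerpt claims.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

features = np.random.rand(500, 512)      # placeholder deep features (N, D)
labels = np.random.randint(0, 10, 500)   # placeholder class labels

emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(features)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=8, cmap="tab10")
plt.title("t-SNE of deep features")
plt.show()
```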
“…10 illustrates the confusion matrix produced by our proposed method (NBCL) with the 20% training ratio. Each …

Method | Accuracy (Mean±std)
AlexNet+sum pooling [2] | 94.10±0.93
VGG-VD16+sum pooling [2] | 91.67±1.40
SPP-Net [25] | 96.67±0.94
GoogleNet [68] | 94.31±0.89
VGG-VD16 [68] | 95.21±1.20
DCA fusion [10] | 96.90±0.77
MCNN [41] | 96.66±0.90
D-CNN [16] | 98.93±0.10
Triple networks [40] | 97.99±0.53
VGG-VD16+AlexNet [35] | 98.81±0.38
Fusion by concatenation [45] | 98.10±0.20
MDFR [77] | 98.02±0.51
APDC-Net [5] | 97.05±0.43
BoWK [46] | 97.52±0.80
Attention GANs [76] | 97.69±0.69
AlexNet+SAFF [9] | 96.13±0.97
VGG-VD16+SAFF [9] | 97.02±0.78
Color fusion [1] | 98.10±0.00
Graph CNN [20] | 99.00±0.43
IDCCP [65] | 99.05±0.20
SEMSDNet [58] | 99.41±0.14
NBCL (the proposed) | 99.57±0.36…”
Section: Ablation Study (mentioning)
confidence: 99%
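For context on the excerpt's setup, here is a minimal sketch of producing a confusion matrix under a 20% training ratio, with a placeholder linear classifier standing in for NBCL (the data, model, and class count are all assumptions):

```python
# Confusion matrix under a 20% training ratio, mirroring the evaluation
# protocol in the excerpt; classifier and features are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import confusion_matrix

X = np.random.rand(1000, 512)            # placeholder deep features
y = np.random.randint(0, 30, 1000)       # placeholder scene labels

# train_size=0.2 mirrors the 20% training ratio used in the comparison
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=0.2, stratify=y, random_state=0)
clf = LinearSVC().fit(X_tr, y_tr)
cm = confusion_matrix(y_te, clf.predict(X_te))  # rows: true, cols: predicted
```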
“…4) WHU-RS Dataset:

Method | Accuracy (Mean±std)
Transferring CNNs (Case I) [27] | 96.70±0.00
Transferring CNNs (Case II) [27] | 98.60±0.00
Two-Step Categorisation [71] | 93.70±0.57
CaffeNet [68] | 94.80±0.00
GoogleNet [68] | 92.90±0.00
VGG-VD16 [68] | 95.10±0.00
MDDC [48] | 98.27±0.53
salM³LBP-CLM [7] | 96.38±0.76
AlexNet-SPP-SS [25] | 95.00±1.12
VGG-VD19 [35] | 98.16±0.77
DCA by addition [10] | 98.70±0.22
MLF [34] | 88.16±2.76
Fusion by concatenation [45] | 99.17±0.20
D-DSML-CaffeNet [23] | 96.64±0.68
BoWK [46] | 99.47±0.60
Color fusion [1] | 96.60±0.00
NBCL (the proposed) | 99.63±0.42…”
Section: Ablation Study (mentioning)
confidence: 99%
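The Accuracy (Mean±std) entries in comparison tables like those above are conventionally obtained by repeating the random train/test split several times and aggregating the scores. A generic sketch of that protocol, assuming placeholder features and a linear classifier (not the cited papers' evaluation code):

```python
# Mean±std accuracy over repeated random splits, the usual way such
# benchmark numbers are reported; data and classifier are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X = np.random.rand(1000, 512)            # placeholder features
y = np.random.randint(0, 19, 1000)       # e.g., 19 WHU-RS scene classes

scores = []
for seed in range(10):                   # ten random train/test splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=0.6, stratify=y, random_state=seed)
    scores.append(LinearSVC().fit(X_tr, y_tr).score(X_te, y_te))

print(f"{np.mean(scores) * 100:.2f}±{np.std(scores) * 100:.2f}")
```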