2020
DOI: 10.1109/jstars.2020.3011333

A Deep Neural Network Combined CNN and GCN for Remote Sensing Scene Classification

Abstract: Learning powerful discriminative features is the key to remote sensing scene classification. Most existing approaches based on convolutional neural networks (CNNs) have achieved great results. However, they mainly focus on global-based visual features while ignoring object-based location features, which are important for large-scale scene classification. There are a large number of scene-related ground objects in remote sensing images, and the graph convolutional network (GCN) has the potential to capture the…
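The abstract (and the related-work excerpt quoted further below) outlines a two-stream design: a CNN extracts global-based visual features from the scene image, a GCN reasons over detected ground objects to capture object-based location features, and the two streams are fused for classification. The following is a minimal, hypothetical PyTorch sketch of that idea rather than the authors' implementation; the class names (GCNLayer, SceneCNNGCN), the tiny CNN backbone, the layer sizes, and the concatenation-based fusion are all assumptions.

```python
# Hypothetical sketch of a two-stream CNN + GCN scene classifier (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) adjacency over ground objects
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)      # add self-loops
        deg_inv_sqrt = a_hat.sum(dim=1).clamp(min=1e-6).pow(-0.5)
        a_norm = deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)
        return F.relu(self.linear(a_norm @ x))

class SceneCNNGCN(nn.Module):
    """Two streams: a CNN for global visual features and a GCN over detected
    ground objects for spatial/location features, fused for classification."""
    def __init__(self, num_classes, node_dim=16, hidden_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(                        # global visual stream
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.gcn1 = GCNLayer(node_dim, hidden_dim)       # object/location stream
        self.gcn2 = GCNLayer(hidden_dim, hidden_dim)
        self.classifier = nn.Linear(64 + hidden_dim, num_classes)

    def forward(self, image, node_feats, adj):
        visual = self.cnn(image)                         # (B, 64) global features
        graph = self.gcn2(self.gcn1(node_feats, adj), adj).mean(dim=0)
        graph = graph.unsqueeze(0).expand(visual.size(0), -1)
        return self.classifier(torch.cat([visual, graph], dim=1))

# Toy usage: one 256x256 image, 5 detected ground objects with 16-dim encodings.
model = SceneCNNGCN(num_classes=30)
image = torch.randn(1, 3, 256, 256)
nodes, adj = torch.randn(5, 16), (torch.rand(5, 5) > 0.5).float()
logits = model(image, nodes, adj)                        # (1, 30) class scores
```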

Citations: cited by 100 publications (65 citation statements)
References: 51 publications
“…The CM for 50% of training images is shown in Figure 3 to analyse the results in detail. All the categories are predicted correctly, and only the classes of 'aeroplane', 'building', 'harbour', and 'sparseresidential' have some difficult samples. The proposed ERANet is compared with MG-CAP [36], RTN [42], DCNN [37], MSCP [38], Combined CNN and GCN [39], MSITL [40], ADFF [41], and MSDFF [35]. Among all SOTAs, the proposed ERANet has achieved the best score with the 20% and 10% training sets.…”
Section: Results (mentioning)
confidence: 99%
“…Due to their convincing performance and high interpretability, GNNs have recently been applied in a wide range of problem domains, including scene graph generation, scene classification (Gao, Shi, & Li et al, 2021; Liang, Deng, & Zeng, 2020), point cloud segmentation/classification (Wen, Li, Yao, Peng, & Chi, 2021; Widyaningrum, Bai, Fajari, & Lindenbergh, 2021), text classification, traffic forecasting (Peng, Wang, Du, Bhuiyan, Ma, Liu & Yu, 2020), and event detection (Guo et al, 2019). However, applications of GNNs in RS are just beginning.…”
Section: Graph Neural Network (mentioning)
confidence: 99%
“…The authors utilized the complementarity between local and global features by exploiting features at different scales and discarding redundant information. Liang et al [37] introduced a novel two-stream architecture combining global-based visual features and object-based features. The model first extracts appearance-based visual features from the scene image using a convolutional neural network, then detects the ground objects, and finally constructs a graph to learn spatial features using a graph convolutional network.…”
Section: Related Work (mentioning)
confidence: 99%
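The excerpt above notes that the model detects ground objects and then constructs a graph over them for the GCN stream. As a purely illustrative companion to the sketch after the abstract, the snippet below shows one plausible way to build such an object graph from detection boxes; the distance-threshold rule and the helper name build_object_graph are assumptions, not the paper's actual construction.

```python
# Illustrative only: turn detected ground objects into a graph for a GCN stream.
import torch

def build_object_graph(boxes: torch.Tensor, threshold: float = 0.25) -> torch.Tensor:
    """boxes: (N, 4) detections as (x1, y1, x2, y2) in normalized coordinates.
    Returns an (N, N) adjacency linking objects whose centroids are close."""
    centers = torch.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                           (boxes[:, 1] + boxes[:, 3]) / 2], dim=1)   # (N, 2) centroids
    dist = torch.cdist(centers, centers)                              # pairwise distances
    adj = (dist < threshold).float()
    adj.fill_diagonal_(0)                                             # self-loops are added inside the GCN layer
    return adj

# Example: three detected objects in a normalized image frame.
boxes = torch.tensor([[0.1, 0.1, 0.3, 0.3],
                      [0.2, 0.2, 0.4, 0.4],
                      [0.7, 0.7, 0.9, 0.9]])
adj = build_object_graph(boxes)   # objects 0 and 1 are connected; object 2 is isolated
```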
“…Therefore, using a CNN as a "black box" to classify remote sensing images may not be good enough for complex scenes. Even though several works [44, 6] attempted to focus on the critical local image patches and discard useless information, they still utilize only the visual information [37].…”
mentioning
confidence: 99%