2020
DOI: 10.1109/access.2019.2963769
Scene Classification of Remote Sensing Images Based on Saliency Dual Attention Residual Network

Abstract: Scene classification of high-resolution Remote Sensing Images (RSI) is one of the basic challenges in RSI interpretation. Existing scene classification methods based on deep learning have achieved impressive performance. However, since RSI commonly contain various types of ground objects and complex backgrounds, most methods cannot focus on the salient features of a scene, which limits classification performance. To address this issue, we propose a novel Saliency Dual Attention Residual Network (SDAResNet) to …

Cited by 34 publications (25 citation statements) · References 36 publications
“…Another work in [43] proposes a novel Saliency Dual Attention Residual Network (SDAResNet) to extract both cross-channel and spatial saliency information for scene classification of RSI. More specifically, spatial attention is embedded in low-level features to emphasize salient location information and suppress background information, and channel attention is integrated into high-level features to extract salient, meaningful information.…”
Section: Figure
mentioning confidence: 99%
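As a rough illustration of the dual-attention idea quoted above, the sketch below applies spatial attention to low-level features and channel attention to high-level features. It is a minimal NumPy stand-in, not the SDAResNet implementation: the fixed mixing weights and the sigmoid over pooled channel descriptors replace the learned convolution and MLP of the actual network, and all array shapes are invented for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feat):
    """Spatial attention over a (C, H, W) feature map.

    Pools across channels (mean and max), mixes the two pooled maps with
    fixed weights (a stand-in for a learned convolution), and produces a
    (1, H, W) saliency mask in (0, 1) that reweights every spatial
    location, shrinking background positions.
    """
    avg_map = feat.mean(axis=0, keepdims=True)   # (1, H, W)
    max_map = feat.max(axis=0, keepdims=True)    # (1, H, W)
    mask = sigmoid(0.5 * avg_map + 0.5 * max_map)
    return feat * mask

def channel_attention(feat):
    """Channel attention over a (C, H, W) feature map.

    Global-average-pools each channel to a scalar descriptor, squashes the
    descriptors through a sigmoid (a stand-in for a learned MLP), and
    rescales each channel by its weight in (0, 1).
    """
    desc = feat.mean(axis=(1, 2))                 # (C,)
    weights = sigmoid(desc)                       # (C,)
    return feat * weights[:, None, None]

rng = np.random.default_rng(0)
low = rng.standard_normal((8, 16, 16))    # hypothetical low-level features
high = rng.standard_normal((64, 4, 4))    # hypothetical high-level features
out_low = spatial_attention(low)          # spatial attention on low-level features
out_high = channel_attention(high)        # channel attention on high-level features
print(out_low.shape, out_high.shape)      # both modules preserve the input shape
```

Both modules act purely as multiplicative gates, which is why the feature-map shapes are unchanged and the gated outputs can never exceed the inputs in magnitude.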
“…Also, top(4, 3, 2, 1) denotes the case where only the fourth level of PyConv is used for the first D block (9 × 9 kernel), the third level for the 2nd D block (7 × 7 kernel), the second level for the 3rd D block (5 × 5 kernel), and the first level for the 4th D block (3 × 3 kernel). For levels (5, 4, 3, 2), the fifth level has an 11 × 11 kernel.…”
Section: Ablation Study
mentioning confidence: 99%
“…Top(4, 3, 2, 1) also yields a significant performance improvement over the baseline, indicating that increasing the filter sizes when building PyConv is beneficial. Besides, we also experimented with levels (5, 4, 3, 2), in which more levels are added to PyConv. In this case, the overall accuracy is lower than with levels (4, 3, 2, 1), indicating that more levels in PyConv do not necessarily yield better performance.…”
Section: Ablation Study
mentioning confidence: 99%
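The ablation statements above vary how many kernel sizes a pyramidal convolution (PyConv) uses. The sketch below is a generic, single-channel NumPy illustration of that idea, not the exact per-block configuration from the cited ablation: each pyramid level convolves the input with a kernel of its own size, and the per-level responses are stacked into separate output channels. The random kernels and the 16 × 16 input are invented for the example.

```python
import numpy as np

def conv2d_same(img, kernel):
    """Naive 'same'-padded 2-D convolution of an (H, W) map with a (k, k) kernel."""
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(img, pad)
    H, W = img.shape
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

def pyconv(img, kernel_sizes, rng):
    """One pyramidal-convolution layer on a single-channel (H, W) input.

    Each level applies a random kernel of its own size; responses are
    stacked so the output has one channel per level, mirroring how PyConv
    distributes its output channels across kernel sizes.
    """
    outs = [conv2d_same(img, rng.standard_normal((k, k)) / k)
            for k in kernel_sizes]
    return np.stack(outs)                       # (levels, H, W)

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
levels_4 = pyconv(img, [9, 7, 5, 3], rng)       # four levels, as in (4, 3, 2, 1)
levels_5 = pyconv(img, [11, 9, 7, 5, 3], rng)   # adding an 11 x 11 fifth level
print(levels_4.shape, levels_5.shape)           # (4, 16, 16) vs (5, 16, 16)
```

Because 'same' padding keeps the spatial size fixed at every level, adding a fifth level only adds one more output channel; the ablation's finding is that this extra capacity does not automatically translate into higher accuracy.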