2020
DOI: 10.1016/j.neucom.2019.11.068
RADC-Net: A residual attention based convolution network for aerial scene classification

Cited by 68 publications (48 citation statements)
References 35 publications
“…During the training reconstruction phase, Goh et al [130] used the mechanism of top-down attention in deep Boltzmann machines (DBMs) as a regularizing factor. Note that the network can be globally optimized using a top-down learning strategy in a similar manner, where the maps progressively output to the input throughout the learning process [129][130][131][132].…”
Section: Residual Attention Neural Network (mentioning)
confidence: 99%
“…Meanwhile, in [17], texture information obtained with Local Binary Patterns (LBP) was used alongside the standard RGB images for training deep learning models. In contrast, [8] proposed a novel network architecture for aerial image classification, consisting of dense blocks augmented with modified residual attention layers and a classification layer. The network was designed to contain fewer parameters than traditional models.…”
Section: Background and Motivation (mentioning)
confidence: 99%
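The LBP texture descriptor mentioned in the excerpt above can be sketched minimally. This basic 8-neighbor formulation is a common variant and an assumption for illustration; it is not necessarily the exact LBP configuration used in [17]:

```python
import numpy as np

def lbp_8neighbors(img):
    """Basic 8-neighbor Local Binary Pattern over a grayscale image.

    Each interior pixel is encoded as an 8-bit code: one bit per
    neighbor, set when that neighbor is >= the center pixel.
    Returns the code map for the interior (edges are dropped).
    """
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]  # center pixels
    # neighbor offsets, clockwise from the top-left neighbor
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code
```

In the setup described by the excerpt, the resulting LBP map (or a histogram of its codes) would be supplied as an extra input channel or stream alongside the RGB image.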
“…Consequently, CNN models have also been applied to aerial image classification, leading to better classification performance than the prior approaches. Most [6,7,8]. The training phase involved in these approaches is computationally very expensive.…”
Section: Introduction (mentioning)
confidence: 99%
“…This model may have an advantage when the data set's scale increases. In [60], a self-attention-based feature fusion approach (SAFF) is applied. In [18], a concentric circle pooling layer is proposed to incorporate rotation-invariant spatial layout information of remote sensing scene images.…”

Comparison table embedded in the excerpt (mean accuracy %, standard deviation in parentheses):

  [10]                    96.90 (0.77)
  AlexNet+SPP [17]        95.95 (1.01)
  CCP-net [18]            97.52 (0.97)
  AlexNet+MSCP [56]       97.29 (0.63)
  VGG-VD16+MSCP [56]      98.36 (0.58)
  BAFF [57]               95.48 (0.22)
  RADC-Net [58]           97.05 (0.48)
  D-CapsNet [54]          99.05 (0.12)
  DDRL-AM [59]            99.05 (0.08)
  VGG-VD16+CapsNet [55]   98.81 (0.12)
  AlexNet+SAFF [60]       96.13 (0.97)
  VGG-VD16+SAFF [60]      97.02 (0.78)

Section: E. Compared With Other Pre-trained CNN-based Methods (mentioning)
confidence: 99%
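The concentric circle pooling idea credited to [18] above can be sketched as pooling a feature map over rings around its center, so that rotating the input about the center leaves the pooled vector unchanged. The details below (ring spacing, max pooling) are illustrative assumptions, not the exact CCP-net layer:

```python
import numpy as np

def concentric_circle_pool(fmap, n_rings=3):
    """Pool a (H, W, C) feature map over concentric rings around
    its center: one max-pooled value per ring per channel.

    Because ring membership depends only on distance from the
    center, the output is invariant to 90-degree rotations of the
    input (and approximately invariant to arbitrary rotations).
    """
    h, w, c = fmap.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    # equally spaced radial bins covering the whole map
    edges = np.linspace(0.0, r.max() + 1e-6, n_rings + 1)
    pooled = np.empty((n_rings, c))
    for i in range(n_rings):
        mask = (r >= edges[i]) & (r < edges[i + 1])
        pooled[i] = fmap[mask].max(axis=0)  # max over ring pixels
    return pooled  # shape: (n_rings, channels)
```

Flattening the (n_rings, channels) output gives a fixed-length, rotation-robust descriptor that can feed a classification layer regardless of input orientation.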