2019
DOI: 10.1109/access.2019.2957163
Hyperspectral Image Classification With Pre-Activation Residual Attention Network

Abstract: Recently, convolutional neural networks (CNNs) have been introduced for hyperspectral image (HSI) classification and have shown considerable classification performance. However, previous CNNs designed for spectral-spatial HSI classification stress learning the spatial correlation of HSI data and neglect the channel responses of feature maps. Furthermore, the lack of training samples remains the major challenge for CNN-based HSI classification methods to achieve better performance. To address the …

Cited by 30 publications (11 citation statements)
References 48 publications (52 reference statements)
“…In addition, for a fair comparison, all experiments are performed in the same environment with the same hyperparameters. The proposed RCTC model is compared with Alex-Net [24], Res-Net [21], Dense-Net [29], PRAN [8], AML [27], and SAGP [31]. Tables 2, 3 and 7 show the results of the comparative experiments with the different methods.…”
Section: Real-time Transmission Image Preprocessing
confidence: 99%
“…For the spatial domain, spatial features are regarded as complementary to spectral ones, and the inner-spatial dependency was exploited to support spatial attention. Gao et al. [53] added the attention mechanism into the pre-activation residual block. Sun et al. [54] designed an attention module that can be embedded anywhere in the spectral and spatial modules for HSIC.…”
Section: A. Attention Mechanism
confidence: 99%
“…In general, the purpose of the attention mechanism is to strengthen important information and suppress unimportant information. Inspired by the principle of SENet [36], we used a bottom-up top-down structure to construct the attention mechanism based on the CNN. Our proposed channel-wise and spatial-wise attention modules share the same architecture, but their up-sampling implementations differ.…”
Section: Proposed Attention Mechanism
confidence: 99%
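The bottom-up top-down structure mentioned in this statement can be sketched roughly as follows. This is an illustrative NumPy toy, not the cited paper's implementation: the average-pool factor, nearest-neighbour up-sampling, and single-channel input are all assumptions made for brevity.

```python
import numpy as np

def spatial_attention_mask(feature_map, pool=2):
    """Bottom-up top-down spatial attention (illustrative sketch).

    feature_map: (H, W) array with H and W divisible by `pool`.
    """
    H, W = feature_map.shape
    # Bottom-up: average-pool to a coarse grid, enlarging the receptive field.
    coarse = feature_map.reshape(H // pool, pool, W // pool, pool).mean(axis=(1, 3))
    # Top-down: nearest-neighbour up-sampling restores the original resolution.
    up = np.repeat(np.repeat(coarse, pool, axis=0), pool, axis=1)
    # A sigmoid turns the map into soft attention weights in (0, 1).
    mask = 1.0 / (1.0 + np.exp(-up))
    # Re-weight the input: salient regions are kept, others are attenuated.
    return feature_map * mask

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 4))
y = spatial_attention_mask(x)   # same shape as x, element-wise attenuated
```

In a trained network the coarse branch would contain learned convolutions rather than plain pooling; the sketch only shows the shape-level flow (downsample, upsample, sigmoid, element-wise rescale).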
“…This attention module, which focuses only on spatial information, cannot globally optimize the feature maps produced by the convolutional layers. The authors of [36] adopted the squeeze-and-excitation network (SENet) [37] to adaptively recalibrate channel feature responses by explicitly modelling the interdependencies between channels. The authors of [38] also constructed a spatial-spectral squeeze-and-excitation (SSSE) module based on SENet.…”
Section: Introduction
confidence: 99%
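The SENet-style channel recalibration referenced in this statement can be illustrated with a minimal NumPy sketch. The reduction ratio, random weights, and two-layer bottleneck below are assumptions standing in for the learned excitation network, not the cited paper's code.

```python
import numpy as np

def se_recalibrate(feature_maps, w1, w2):
    """Squeeze-and-excitation channel recalibration (illustrative sketch).

    feature_maps: (C, H, W) array; w1: (C//r, C); w2: (C, C//r),
    where r is the channel reduction ratio of the bottleneck.
    """
    # Squeeze: global average pooling collapses each channel to one scalar.
    z = feature_maps.mean(axis=(1, 2))            # shape (C,)
    # Excitation: a bottleneck MLP models inter-channel dependencies.
    s = np.maximum(w1 @ z, 0.0)                   # ReLU, shape (C//r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))           # sigmoid, shape (C,)
    # Recalibrate: scale each channel by its learned importance in (0, 1).
    return feature_maps * s[:, None, None]

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))   # stand-in for the learned FC layers
w2 = rng.standard_normal((C, C // r))
y = se_recalibrate(x, w1, w2)           # same shape, channel-wise rescaled
```

The key property is that the sigmoid gate lies in (0, 1), so each channel is attenuated in proportion to its learned importance while the spatial layout is untouched.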