2022
DOI: 10.1109/jstars.2022.3171586

Hybrid Dense Network With Attention Mechanism for Hyperspectral Image Classification

Abstract: The nonlinear relation between spectral information and the corresponding objects (complex physiognomies) makes pixel-wise classification challenging for conventional methods. To deal with nonlinearity issues in Hyperspectral Image Classification (HSIC), Convolutional Neural Networks (CNNs) are better suited. However, fixed kernel sizes make traditional CNNs too specific: neither flexible nor conducive to feature learning, which impacts classification accuracy. The convolution of different ke…

Cited by 20 publications (5 citation statements)
References 56 publications
“…The activation map value for the spatial-spectral position (x, y, z) at the i-th feature map and j-th layer can be denoted as v_{i,j}^{(x,y,z)}, in which d_{i−1} represents the total number of feature maps at the (i − 1)-th layer, and w_{i,j} and b_{i,j} denote the kernel weights and bias, respectively. Additionally, 2γ + 1, 2δ + 1, and 2ν + 1 correspond to the height, width, and depth of the kernel [19], [63]. The Swin Transformer (ST) excels at constructing multiscale feature maps by iteratively fusing neighboring patches using the window partition mechanism.…”
Section: Proposed Methodology
confidence: 99%
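The statement above defines a single 3-D convolution activation over a (2γ+1) × (2δ+1) × (2ν+1) spatial-spectral window. A minimal numpy sketch of that computation, assuming zero-centered windows that stay inside the feature maps (function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

def conv3d_activation(feature_maps, kernel, bias, x, y, z):
    """One activation v_{i,j}^{(x,y,z)} at spatial-spectral position (x, y, z).

    feature_maps: shape (d_prev, H, W, D) -- the d_{i-1} maps of layer i-1
    kernel:       shape (d_prev, 2g+1, 2d+1, 2n+1) -- weights w_{i,j}
    bias:         scalar b_{i,j}

    The window is centred on (x, y, z); the caller must keep it in bounds.
    """
    _, kh, kw, kd = kernel.shape
    g, d, n = kh // 2, kw // 2, kd // 2  # half-extents gamma, delta, nu
    # Slice the neighbourhood across all d_{i-1} input maps at once.
    patch = feature_maps[:, x - g:x + g + 1, y - d:y + d + 1, z - n:z + n + 1]
    # Weighted sum over maps and window, plus bias (no activation function here).
    return float(np.sum(patch * kernel) + bias)
```

A full layer would sweep (x, y, z) over every valid position and apply a nonlinearity; this sketch isolates the per-position sum the quoted notation describes.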
“…A fast and compact 3-dimensional CNN model was proposed in [52], which significantly reduces the computational cost and improves the experimental results on several hyperspectral datasets. In this hierarchy, the works [37], [38], [53], [54] proposed hybrid 3-dimensional followed by 2-dimensional CNN layers for a better spatial-spectral feature hierarchy for end classification.…”
Section: Literature Review
confidence: 99%
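The hybrid 3-D-then-2-D design mentioned above hinges on one bridging step: the output of the 3-D convolution stage is folded so its residual spectral depth becomes extra 2-D channels. A short sketch of that reshape, assuming a channels-first layout (the shapes and the HybridSN-style framing are illustrative assumptions, not taken from the paper's code):

```python
import numpy as np

def bridge_3d_to_2d(x):
    """Fold the spectral depth of a 3-D conv output into the channel axis,
    so 2-D convolutions can consume it:
        (batch, channels, depth, H, W) -> (batch, channels * depth, H, W)
    This is the usual handoff in hybrid 3-D/2-D CNNs for HSI classification.
    """
    b, c, d, h, w = x.shape
    return x.reshape(b, c * d, h, w)
```

After this step, ordinary 2-D convolutions refine spatial features over the stacked spatial-spectral channels, which is what gives the hybrid models their deeper spatial-spectral feature hierarchy.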
“…Deep learning methods integrate feature extraction and classification into an end-to-end workflow and extract features automatically [11], and thus dominate HSI classification. As a popular deep learning method, convolutional neural networks (CNNs) have been introduced into HSI classification and have demonstrated advanced performance [12][13][14]. However, the numerous spectral bands and complex cross-band structure in HSI push high-performance CNNs to become deeper and wider [15].…”
Section: Introduction
confidence: 99%