2023
DOI: 10.1109/tgrs.2023.3272639
AAtt-CNN: Automatic Attention-Based Convolutional Neural Networks for Hyperspectral Image Classification

Cited by 27 publications (7 citation statements)
References 82 publications
“…Involving recalibration of the spectral features. Numerous scholars have successfully integrated attention mechanisms into spectral similarity measurements, effectively aggregating similar spectra and optimizing the cognitive performance of convolutional neural networks [44], [45], [46], [47].…”
Section: A Prerequisite (mentioning)
confidence: 99%
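The recalibration of spectral features described above follows the familiar squeeze-and-excitation pattern of channel attention applied along the spectral axis. The sketch below is an illustration of that pattern only, not the modules used in [44]-[47]; the class name, reduction ratio, and patch dimensions are assumptions.

```python
# Minimal sketch of spectral (channel) attention recalibration, squeeze-and-excitation style.
# Illustrative only: class name, reduction ratio, and tensor sizes are assumptions.
import torch
import torch.nn as nn


class SpectralAttention(nn.Module):
    def __init__(self, num_bands: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global spatial average per band
        self.fc = nn.Sequential(                       # excitation: learn per-band weights
            nn.Linear(num_bands, num_bands // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(num_bands // reduction, num_bands),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, bands, height, width) hyperspectral patch
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)                    # (batch, bands)
        w = self.fc(w).view(b, c, 1, 1)                # per-band attention weights in (0, 1)
        return x * w                                   # recalibrate: reweight the spectral bands


if __name__ == "__main__":
    patch = torch.randn(2, 64, 9, 9)                   # 2 patches, 64 spectral bands, 9x9 window
    print(SpectralAttention(num_bands=64)(patch).shape)  # torch.Size([2, 64, 9, 9])
```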
“…The depth of a neural network will affect its capacity. The term deep learning (or deep neural networks) refers to neural networks with at least one hidden layer [19].…”
Section: Neural Network (mentioning)
confidence: 99%
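By the definition cited above, the smallest network that qualifies as deep has a single hidden layer. A minimal sketch, with layer sizes chosen purely for illustration:

```python
# Smallest network that is "deep" under the cited definition: one hidden layer.
# Layer sizes are illustrative assumptions only.
import torch.nn as nn

shallow = nn.Linear(100, 10)        # no hidden layer: a plain linear classifier
deep = nn.Sequential(               # one hidden layer -> a (minimal) deep network
    nn.Linear(100, 64),             # hidden layer; stacking more layers increases depth/capacity
    nn.ReLU(),
    nn.Linear(64, 10),
)
```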
“…Furthermore, more complex and better-performing methods also exist to optimise the weights, such as the Levenberg-Marquardt algorithm [32]. At present, deep networks and their ensembles are the most advanced solutions for the majority of typical applications, including computer vision, speech processing, and image processing [14][15][16][17][18][19]. However, such solutions rely on "heavy" models and a large number of parameters, which can be computationally expensive and hamper their use in real-time systems.…”
Section: Neural Network (mentioning)
confidence: 99%
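The Levenberg-Marquardt algorithm mentioned in [32] damps the Gauss-Newton update, solving (JᵀJ + λI)Δw = −Jᵀr for the residual Jacobian J and damping factor λ. A minimal sketch on a toy curve-fitting problem follows; the model, damping schedule, and function names are assumptions for illustration, not the network-training variant used in the cited work.

```python
# Sketch of a generic Levenberg-Marquardt loop on a toy least-squares problem.
# Illustrative only: the exponential model, damping schedule, and names are assumptions.
import numpy as np


def levenberg_marquardt(w, residual_fn, jacobian_fn, iters=100, lam=1e-2):
    """Damped Gauss-Newton with adaptive damping (classic Levenberg-Marquardt)."""
    for _ in range(iters):
        r = residual_fn(w)                              # residual vector
        J = jacobian_fn(w)                              # Jacobian of residuals w.r.t. w
        A = J.T @ J + lam * np.eye(len(w))              # damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        if np.sum(residual_fn(w + step) ** 2) < np.sum(r ** 2):
            w = w + step                                # improvement: accept, trust Gauss-Newton more
            lam *= 0.5
        else:
            lam *= 2.0                                  # worse: reject, move toward gradient descent
    return w


# Toy example: fit y = a * exp(b * x) to synthetic data.
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(1.5 * x)

residual_fn = lambda w: w[0] * np.exp(w[1] * x) - y
jacobian_fn = lambda w: np.stack([np.exp(w[1] * x), w[0] * x * np.exp(w[1] * x)], axis=1)

w = levenberg_marquardt(np.array([1.0, 1.0]), residual_fn, jacobian_fn)
print(w)                                                # approaches [2.0, 1.5]
```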
“…Zhao et al. [37] designed channel and spatial residual attention modules to integrate multi-scale features by introducing a residual connection and an attention mechanism. Paoletti et al. [38] employed channel attention to automatically design a network that extracts spatial and spectral features. Shi et al. [39] proposed the 3D-OCONV method and abstracted the spectral features by combining 3D-OCONV with spectral attention.…”
Section: Attention Mechanism (mentioning)
confidence: 99%
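The channel-plus-spatial residual attention attributed to Zhao et al. [37] can be sketched in a CBAM-style form: channel recalibration, then a spatial mask, then a residual skip back to the input. The layer and kernel sizes below are assumptions, not the published architecture.

```python
# Minimal sketch of a residual channel + spatial attention block (CBAM-style formulation).
# Illustrative only: layer sizes, kernel sizes, and names are assumptions, not [37]'s design.
import torch
import torch.nn as nn


class ResidualChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: reweight feature maps along the channel axis.
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: a 7x7 conv over channel-pooled maps yields a spatial mask.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = x * self.channel_fc(x)                               # channel recalibration
        pooled = torch.cat(
            [out.mean(dim=1, keepdim=True), out.amax(dim=1, keepdim=True)], dim=1
        )                                                          # (batch, 2, H, W)
        out = out * self.spatial_conv(pooled)                      # spatial recalibration
        return x + out                                             # residual connection


if __name__ == "__main__":
    feats = torch.randn(2, 32, 17, 17)
    print(ResidualChannelSpatialAttention(32)(feats).shape)        # torch.Size([2, 32, 17, 17])
```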