2023
DOI: 10.1109/tgrs.2023.3266811
Block Diagonal Representation Learning for Hyperspectral Band Selection

Cited by 7 publications (1 citation statement)
References 51 publications
“…The most commonly used methods for reducing spectral band redundancy in HSI dimensionality reduction are band selection (BS) [6,7] and feature extraction (FE) [8,9]. BS-based methods select the most representative band subset directly from the original HSI data without any transformation, so the selected sub-bands remain informative, distinguishable, and beneficial for subsequent tasks. Compared with FE, BS reduces data dimensionality while preserving the physical meaning, inherent properties, and spectral characteristics of the original data, which aids interpretation of the selected band subset in subsequent analysis; it has been widely used in practical applications [10,11].…”
Section: Introduction
confidence: 99%
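The distinction the statement draws — BS picks original bands rather than transforming them — can be illustrated with a deliberately simple criterion. The sketch below ranks bands by per-pixel variance and keeps the top-k; this is a minimal stand-in, not the block diagonal representation method of the cited paper, and the array shapes and function name are assumptions for illustration.

```python
import numpy as np

def select_bands_by_variance(cube, k):
    """Toy band selection: rank spectral bands by variance, keep top-k.

    cube: hyperspectral image of shape (H, W, B); returns k band indices.
    Selected bands are original bands, so their physical meaning
    (wavelengths, spectral characteristics) is preserved.
    """
    flat = cube.reshape(-1, cube.shape[-1])        # pixels x bands
    scores = flat.var(axis=0)                      # per-band variance
    return np.sort(np.argsort(scores)[::-1][:k])   # indices of k top bands

rng = np.random.default_rng(0)
hsi = rng.normal(size=(32, 32, 50))                # toy 50-band cube
hsi[..., 10] *= 5.0                                # make band 10 highly varying
bands = select_bands_by_variance(hsi, 5)
print(bands)                                       # band 10 is among the five
```

A feature-extraction method (e.g. PCA) would instead return linear mixtures of all 50 bands, which is exactly the interpretability loss the quoted passage attributes to FE.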