2021
DOI: 10.1109/jstars.2021.3121334
Patch-Free Bilateral Network for Hyperspectral Image Classification Using Limited Samples

Cited by 10 publications (2 citation statements)
References 63 publications
“…The vast majority of methods used for HIC-SS are built on CNNs [24][25][26][27][28][29][30][31][32][33]. Among these CNN-based approaches, some focus on extracting features with better discriminability from limited samples [24,[34][35][36], while others focus on optimizing the training process of classifiers [37]. The learning paradigms used cover a variety of models, including self-supervised learning [38], transfer learning [39], active learning [40,41], meta-learning [37,42,43], and so on.…”
Section: Introduction
confidence: 99%
“…Hyperspectral image (HSI) classification is widely used in earth observation systems [1,2]. However, the sparsity of labeled samples has long restricted the development of HSI classification technologies, so only a few labeled samples can be used for training models [3,4]. In this case, HSI classification directly using classifiers such as nearest neighbor, random forest (RF), and support vector machine (SVM) often cannot obtain high-precision results [5].…”
Section: Introduction
confidence: 99%
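The classical baselines named in that citation statement (nearest neighbor, RF, SVM) operate on individual pixel spectra, which is part of why they struggle when labeled samples are scarce. The sketch below illustrates that limited-sample setting with scikit-learn; it is not the cited paper's method, and the data shapes, per-class sample count, and hyperparameters are illustrative assumptions using synthetic data in place of a real hyperspectral cube.

```python
# Minimal sketch (assumptions noted above): per-pixel HSI classification with
# classical baselines (nearest neighbor, RF, SVM) under a limited-sample regime.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a hyperspectral cube flattened to (pixels, bands).
n_pixels, n_bands, n_classes = 5000, 200, 9
X = rng.normal(size=(n_pixels, n_bands))
y = rng.integers(0, n_classes, size=n_pixels)

# Limited-sample setting: only a handful of labeled pixels per class for training.
samples_per_class = 10
train_idx = np.concatenate(
    [rng.choice(np.flatnonzero(y == c), samples_per_class, replace=False)
     for c in range(n_classes)]
)
test_idx = np.setdiff1d(np.arange(n_pixels), train_idx)

baselines = {
    "nearest neighbor": KNeighborsClassifier(n_neighbors=1),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM (RBF)": SVC(kernel="rbf", C=10.0, gamma="scale"),
}

for name, clf in baselines.items():
    # Standardize each spectral band, then fit on the small labeled set
    # and evaluate overall accuracy on the remaining pixels.
    model = make_pipeline(StandardScaler(), clf)
    model.fit(X[train_idx], y[train_idx])
    acc = accuracy_score(y[test_idx], model.predict(X[test_idx]))
    print(f"{name}: overall accuracy = {acc:.3f}")
```

Because these baselines classify each spectrum independently and ignore spatial context, their accuracy degrades sharply as the labeled set shrinks, which is the gap the patch-free bilateral network and the other CNN-based approaches cited above aim to close.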