2020
DOI: 10.1109/jstars.2020.3002787
Deep Induction Network for Small Samples Classification of Hyperspectral Images

Abstract: Recently, deep learning models have achieved great success in hyperspectral image classification. However, most deep learning models fail to obtain satisfactory results under small-sample conditions, due to the contradiction between the large parameter space of deep learning models and the insufficient labeled samples in hyperspectral images. To address this problem, a deep model based on the induction network is designed in this paper to improve the classification performance of hyperspec…

Cited by 34 publications (25 citation statements)
References 50 publications
“…For this reason, how to improve the classification accuracy of hyperspectral images under small-sample conditions has become a research hotspot. Many existing techniques have been used to address this problem, such as semi-supervised learning (Fang et al, 2018), active learning (Fang et al, 2018), unsupervised feature extraction (Mei et al, 2019), meta-learning (Gao et al, 2020), and so on.…”
Section: Introduction
Confidence: 99%
“…where $\mathbf{D} \in \mathbb{R}^{r \times c \times P}$ is the tensor composed of $P$ replications, along the third dimension, of the mean matrix of $\mathbf{M}$, and $F^{(1)}$ denotes the feature of the first layer.…”
Section: A Spectral-Spatial Random Patches Network Model
Confidence: 99%
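The tensor construction described in the quoted passage (stacking $P$ copies of the mean matrix along the third dimension) can be sketched in NumPy. The array shapes below are illustrative assumptions, not the paper's actual dimensions:

```python
import numpy as np

# Hypothetical sizes: r x c spatial grid, P spectral bands (assumed for illustration).
r, c, P = 4, 5, 3
M = np.random.rand(r, c, P)

# Mean matrix of M over the third (spectral) dimension: shape (r, c).
mean_M = M.mean(axis=2)

# D: P replications of the mean matrix stacked along the third dimension,
# giving a tensor of shape (r, c, P), as in the quoted definition.
D = np.repeat(mean_M[:, :, np.newaxis], P, axis=2)

assert D.shape == (r, c, P)
```

Equivalently, NumPy broadcasting (`M - mean_M[:, :, np.newaxis]`) avoids materializing `D` when it is only used for mean subtraction.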
“…Deep feature extraction of the SSRPNet. After $F^{(1)}$ passes through the second layer of the SSRPNet, the feature $F^{(2)}$ can be obtained. In this way, the features of any layer can be obtained.…”
Section: A Spectral-Spatial Random Patches Network Model
Confidence: 99%