2022
DOI: 10.3390/rs14132997
Self-Supervised Assisted Semi-Supervised Residual Network for Hyperspectral Image Classification

Abstract: Due to the scarcity and high cost of labeled hyperspectral image (HSI) samples, many deep learning methods driven by massive data cannot achieve the intended expectations. Semi-supervised and self-supervised algorithms have advantages in coping with this phenomenon. This paper primarily concentrates on applying self-supervised strategies to make strides in semi-supervised HSI classification. Notably, we design an effective and unified self-supervised assisted semi-supervised residual network (SSRNet) framewo…
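As a rough illustration of the idea in the abstract (self-supervised auxiliary tasks assisting a semi-supervised classifier when labeled HSI pixels are scarce), the following is a minimal sketch of a combined objective. The function name and the weighting term lambda_ssl are assumptions for illustration only, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: supervised cross-entropy on the few labeled pixels plus
# self-supervised auxiliary losses computed on unlabeled pixels.
# The weight lambda_ssl and the overall form are assumptions, not taken from the paper.
def combined_loss(logits_labeled: torch.Tensor,
                  labels: torch.Tensor,
                  ssl_losses: list[torch.Tensor],
                  lambda_ssl: float = 0.1) -> torch.Tensor:
    loss_sup = F.cross_entropy(logits_labeled, labels)  # labeled HSI samples
    loss_ssl = sum(ssl_losses)                          # e.g. band reconstruction + order prediction
    return loss_sup + lambda_ssl * loss_ssl
```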

Cited by 25 publications (11 citation statements) | References 40 publications
“…Most self-supervised pretext tasks designed a priori may face ambiguity problems; for example, in rotation angle prediction where some objects do not have a usual orientation [49]. Given the advantages of self-supervised learning, a number of methods have been proposed to use self-supervised learning for HSI classification [50][51][52]. The existing methods verify the feasibility of self-supervised learning in the field of HSI classification.…”
Section: Methods
confidence: 99%
“…Their feature learning and classification were based on an ensemble learning strategy that jointly utilised spatial context information at different scales and feature information in different bands. In Song et al. [46], a self-supervised branch appeared in a semi-supervised residual network (SSRNet). Discriminative features were learned by performing two tasks: masked bands reconstruction and spectral order forecast.…”
Section: Related Work
confidence: 99%
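The excerpt above names two self-supervised pretext tasks used by SSRNet: masked bands reconstruction and spectral order forecast. Below is a minimal sketch of how such tasks could be wired up, assuming a simple MLP encoder in place of the residual backbone; the names (PretextHeads, pretext_losses) and all hyperparameters are hypothetical and only illustrate the idea, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PretextHeads(nn.Module):
    """Shared encoder with two heads: band reconstruction and band-order prediction."""
    def __init__(self, n_bands: int, feat_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(                 # stand-in for the residual backbone
            nn.Linear(n_bands, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        self.reconstruct = nn.Linear(feat_dim, n_bands)  # masked-band reconstruction head
        self.order = nn.Linear(feat_dim, 2)              # original vs. shuffled band order

    def forward(self, spectra: torch.Tensor):
        z = self.encoder(spectra)
        return self.reconstruct(z), self.order(z)


def pretext_losses(model: PretextHeads, spectra: torch.Tensor, mask_ratio: float = 0.3):
    """Self-supervised losses on a batch of unlabeled pixel spectra (batch, n_bands)."""
    # Task 1: mask a random subset of bands and reconstruct them.
    mask = torch.rand_like(spectra) < mask_ratio
    masked = spectra.masked_fill(mask, 0.0)
    recon, _ = model(masked)
    loss_recon = nn.functional.mse_loss(recon[mask], spectra[mask])

    # Task 2: shuffle the band order for half the batch and predict whether it was shuffled.
    shuffled = spectra[:, torch.randperm(spectra.shape[1])]
    half = spectra.shape[0] // 2
    mixed = torch.cat([spectra[:half], shuffled[half:]], dim=0)
    labels = torch.cat([torch.zeros(half), torch.ones(spectra.shape[0] - half)]).long()
    _, order_logits = model(mixed)
    loss_order = nn.functional.cross_entropy(order_logits, labels)

    return loss_recon, loss_order
```

In a semi-supervised setup, these two losses would typically be added, with some weighting, to the supervised cross-entropy computed on the few labeled pixels.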
“…However, we only found four papers dealing with self-supervised learning in RS using a small sample (Bing Rangnekar et al., 2020; Song et al., 2022; Zhixiang Xue et al., 2022). These were related to solving the HSI classification problem.…”
Section: Self-supervised Learning
confidence: 99%