Dual-Channel Residual Network for Hyperspectral Image Classification With Noisy Labels
2022
DOI: 10.1109/tgrs.2021.3057689

Cited by 43 publications (31 citation statements)
References 28 publications
“…(2) PSPNet-RL, which employed a robust loss proposed in [54] to alleviate the impact of noisy labels; (3) PSPNet trained by a training set collected outside the current data (we called it PSPNet-TF in this study); (4) the traditional PSPNet. The above-mentioned algorithms and the proposed MTNet were performed on the training set introduced in Section 3.3.2.…”
Section: Qualitative Results for Typical Regions (citation type: mentioning)
Confidence: 99%
“…(2) Generally, PSPNet-RL obtained better accuracies than the PSPNet, due to the introduced robust loss function. This means the robust loss proposed in [54] does have the ability of improving the robustness to noisy labels. However, as shown in Figures 6-8, this improvement in the classification accuracy is at the cost of decreasing the recognition ability of complex features.…”
Section: Quantitative Results for Typical Region (citation type: mentioning)
Confidence: 99%
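The excerpts above refer to the noise-robust loss of [54] without reproducing it. As a hedged illustration of what such a loss can look like, the sketch below implements a generalized cross-entropy loss in PyTorch, which interpolates between standard cross-entropy and MAE and is a common choice for learning with noisy labels; the class name, the parameter q, and the use of PyTorch are illustrative assumptions, not details taken from [54] or the citing paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GeneralizedCrossEntropy(nn.Module):
    """Generalized cross-entropy, L_q = (1 - p_y^q) / q.

    As q -> 0 it approaches standard cross-entropy; at q = 1 it reduces to
    MAE, which is less sensitive to mislabeled training samples.
    """

    def __init__(self, q: float = 0.7):
        super().__init__()
        self.q = q

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        probs = F.softmax(logits, dim=1)
        # probability the model assigns to the (possibly noisy) label
        p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1).clamp_min(1e-7)
        return ((1.0 - p_y.pow(self.q)) / self.q).mean()


# drop-in replacement for nn.CrossEntropyLoss() in a training loop
criterion = GeneralizedCrossEntropy(q=0.7)
loss = criterion(torch.randn(8, 16), torch.randint(0, 16, (8,)))
```

In a training loop such a loss simply replaces nn.CrossEntropyLoss(), trading some fitting capacity on clean samples for reduced sensitivity to mislabeled ones, which mirrors the accuracy trade-off described in the excerpt.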
“…For instance, Yang et al [41] implemented a deep CNN with two-branch architecture for HSI classification, in which low and mid-layers are pretrained on other data sources, with a two-layer MLP performing the final classification. Xu et al [42] proposed a novel dual-channel residual network for classifying HSI with noisy labels, which employs a noise-robust loss function to enhance model robustness and utilizes a single layer MLP for classification. To overcome this drawback, we adopt a weight sharing strategy in the proposed MLP-based architecture, which can lead to significant memory savings and will be detailed in the following Section.…”
Section: Input Layer, Hidden Layers, Output Layer (citation type: mentioning)
Confidence: 99%
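The excerpt mentions a weight-sharing strategy for the MLP-based architecture but does not spell it out. Under the assumption that weight sharing here means reusing one hidden-layer weight matrix across several depths, the following minimal PyTorch sketch shows why this saves memory; the class WeightSharedMLP and all dimensions are hypothetical and not taken from the cited work.

```python
import torch
import torch.nn as nn


class WeightSharedMLP(nn.Module):
    """Hypothetical MLP classifier whose hidden layers reuse one weight matrix.

    Stacking `depth` independent hidden layers costs roughly
    depth * hidden_dim**2 parameters; reapplying a single shared layer keeps
    the hidden-layer cost at hidden_dim**2 regardless of depth.
    """

    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int, depth: int = 3):
        super().__init__()
        self.proj = nn.Linear(in_dim, hidden_dim)
        self.shared = nn.Linear(hidden_dim, hidden_dim)  # reused at every depth
        self.head = nn.Linear(hidden_dim, num_classes)
        self.depth = depth

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.proj(x))
        for _ in range(self.depth):
            h = torch.relu(self.shared(h))  # same parameters each pass
        return self.head(h)


# parameter count does not grow with depth (illustrative dimensions)
model = WeightSharedMLP(in_dim=200, hidden_dim=64, num_classes=16, depth=6)
print(sum(p.numel() for p in model.parameters()))
```

With tied weights the hidden-layer parameter count stays fixed at hidden_dim * hidden_dim however deep the stack is, whereas independent layers would multiply that cost by the number of layers.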