2021
DOI: 10.3390/rs13204098

Hyperspectral Image Mixed Noise Removal Using Subspace Representation and Deep CNN Image Prior

Abstract: The ever-increasing spectral resolution of hyperspectral images (HSIs) is often obtained at the cost of a decrease in the signal-to-noise ratio (SNR) of the measurements. The decreased SNR reduces the reliability of measured features or information extracted from HSIs, thus calling for effective denoising techniques. This work aims to estimate clean HSIs from observations corrupted by mixed noise (containing Gaussian noise, impulse noise, and dead-lines/stripes) by exploiting two main characteristics of hypers…
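As a purely illustrative sketch of the two ingredients named in the abstract, a low-dimensional spectral subspace and a spatial image prior, the Python snippet below projects a noisy HSI onto an SVD subspace, smooths the resulting eigen-images with a simple 2-D filter standing in for the deep CNN prior, and maps the result back to the full spectral space. The rank, the Gaussian filter, and all names are assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions: rank k fixed by hand; scipy's Gaussian filter
# stands in for the deep CNN image prior used in the paper).
import numpy as np
from scipy.ndimage import gaussian_filter

def subspace_denoise(Y, k=8, sigma=1.0):
    """Y: noisy HSI of shape (rows, cols, bands). Returns a denoised estimate."""
    rows, cols, bands = Y.shape
    Ymat = Y.reshape(-1, bands).T                  # bands x pixels matrix
    U, s, Vt = np.linalg.svd(Ymat, full_matrices=False)
    E = U[:, :k]                                   # spectral subspace basis
    Z = (E.T @ Ymat).reshape(k, rows, cols)        # k eigen-images
    # Denoise each eigen-image in the spatial domain (the CNN prior in the paper).
    Z_den = np.stack([gaussian_filter(img, sigma) for img in Z])
    X = E @ Z_den.reshape(k, -1)                   # back to the full spectral space
    return X.T.reshape(rows, cols, bands)
```

Denoising only k eigen-images rather than every spectral band is what makes subspace-based approaches comparatively fast.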

Cited by 19 publications (7 citation statements)
References 52 publications (101 reference statements)
“…In addition, the runtime (in seconds) is also used as an index of computational complexity. In particular, five state-of-the-art methods are employed as baselines for performance comparison: the deep-learning-based HySuDeep (subspace representation and deep CNN image prior) [15], the tensor-decomposition-based LRTDGS (weighted group sparsity-regularized low-rank tensor decomposition) [24], the matrix-factorization-based E-3DTV (enhanced 3-D total variation) [36], SNLRSF (subspace-based nonlocal low-rank and sparse factorization) [33], and F-LRNMF (framelet-regularized low-rank nonnegative matrix factorization) [34].…”
Section: Experimental Simulations and Performance Analysis
confidence: 99%
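Since the statement above uses runtime in seconds as the complexity index, a minimal timing sketch (the denoiser argument is a placeholder, not any of the cited implementations):

```python
import time

def time_denoiser(denoiser, noisy_hsi, repeats=3):
    """Return the best-of-N wall-clock runtime (seconds) of one denoiser call."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        denoiser(noisy_hsi)
        best = min(best, time.perf_counter() - start)
    return best
```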
“…In the aspect of supervised methods, Xie et al. proposed a convolutional neural network with trainable nonlinear functions for hyperspectral image restoration [14]. Zhuang et al. [15] adopted a fast and flexible denoising convolutional neural network to capture the high local correlation in the spatial subspace. However, deep-learning-based methods often involve a great deal of training data, as well as many hyperparameters, which leads to an increased computational burden [16], [17].…”
Section: Introduction
confidence: 99%
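The statement above refers to CNN denoisers such as the fast and flexible network adopted by Zhuang et al. [15]. Purely as a hedged illustration of that family, here is a toy residual (DnCNN-style) denoiser in PyTorch; the depth, width, and names are assumptions, not the cited architecture.

```python
import torch
import torch.nn as nn

class TinyResidualDenoiser(nn.Module):
    """Toy DnCNN-style denoiser: predicts the noise and subtracts it."""
    def __init__(self, channels=1, features=32, depth=5):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Residual learning: output = input - predicted noise.
        return x - self.body(x)

# Usage on a single eigen-image tensor of shape (batch, channel, H, W):
# model = TinyResidualDenoiser()
# clean = model(torch.randn(1, 1, 64, 64))
```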
“…Compared with NGmeet, TenSRDe [29] can be viewed as a tensor version of such an NSR-based HSI denoising method. For more related work, refer to the articles presented in [24], [27], [38], and [39]. To the best of our knowledge, these NSR-related methods are nearly state-of-the-art and are also very fast compared with the earlier SR-unrelated methods.…”
Section: B. NSR-Related Work
confidence: 99%
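The NSR-related methods discussed above all hinge on a low-dimensional spectral subspace. One simple, purely illustrative way to choose its dimension is by the cumulative energy of the singular values; the cited methods use their own, often noise-adjusted, estimators.

```python
import numpy as np

def choose_rank(Ymat, energy=0.999):
    """Ymat: bands x pixels matrix.
    Return the smallest rank whose singular values capture the given
    fraction of the total energy (an illustrative heuristic only)."""
    s = np.linalg.svd(Ymat, compute_uv=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(cum, energy) + 1)
```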
“…Almost all existing detection methods, whether USL- or SSL-based, measure the abnormality level of each pixel using the reconstruction errors of the AE model. However, reconstruction-based methods optimise the model by simply learning an equivalent mapping between the input and output data, which has the following problems: in the original spectral domain, the spectral information of an HSI is affected to a certain extent by noise and spatial resolution [44,45], resulting in poor separability between backgrounds and anomalies. Furthermore, the spectral dimension of an HSI is higher.…”
Section: Introduction
confidence: 99%
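To make the reconstruction-error scoring criticised in the statement above concrete, here is a minimal sketch of per-pixel anomaly scores from a toy fully connected autoencoder over spectra; the layer sizes and names are assumptions, and training is omitted.

```python
import torch
import torch.nn as nn

class SpectralAE(nn.Module):
    """Toy autoencoder over per-pixel spectra (bands -> latent -> bands)."""
    def __init__(self, bands, latent=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(bands, 64), nn.ReLU(), nn.Linear(64, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, bands))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(model, spectra):
    """spectra: (num_pixels, bands) tensor.
    Higher mean squared reconstruction error = more anomalous pixel."""
    with torch.no_grad():
        recon = model(spectra)
    return ((spectra - recon) ** 2).mean(dim=1)
```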