2020
DOI: 10.1109/tgrs.2019.2940534
Mixed Noise Removal in Hyperspectral Image via Low-Fibered-Rank Regularization

Cited by 189 publications (107 citation statements)
References 65 publications
“…In this section, we perform simulated and real experiments to illustrate the efficiency and performance of our NLBTD method for HSI restoration. Six methods are used to compare denoising performance: low-rank matrix factorization with TV [22] (denoted as LRTV), a method combining HyRes with a sparse noise removal technique [54] (denoted as HyMiNoR), the three-directional tensor nuclear norm method [23] (denoted as 3DTNN), a subspace-based method [52] (denoted as L1HyMixDe), low-rank Tucker decomposition with spatial-spectral TV [55] (denoted as LRTDTV), and block term decomposition with spatial-spectral TV [41] (denoted as LRTFL0). Following the authors' suggestions, we tune the parameters of these state-of-the-art methods to achieve optimal performance.…”
Section: Methods (mentioning)
confidence: 99%
“…Furthermore, He et al. [22] integrated spectral and spatial low-rankness into a unified framework, where the nuclear norm is used to explore the low-rank property and total variation (TV) regularization is used to capture the smooth information of the HSI. However, matrix-based approaches require reshaping the 3-D HSI into a 2-D matrix, which destroys the spatial correlation [23][24][25][26][27].…”
Section: Introduction (mentioning)
confidence: 99%
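The reshaping the excerpt refers to is mode-n matricization (unfolding) of the 3-D data cube. A minimal numpy sketch of the idea; the `unfold` helper and the array sizes are illustrative, not from the cited papers:

```python
import numpy as np

# A small synthetic "HSI" cube: height x width x spectral bands.
hsi = np.random.rand(4, 5, 6)

def unfold(tensor, mode):
    """Mode-n matricization: move axis `mode` to the front,
    then flatten the remaining axes into columns."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# Unfolding along the spectral mode gives a bands x (height*width) matrix.
# Spatially adjacent pixels are in general no longer adjacent columns,
# which is the loss of spatial correlation the excerpt describes.
X3 = unfold(hsi, 2)
print(X3.shape)  # (6, 20)
```

Matrix-based low-rank methods operate on such unfoldings, which is why purely matrix formulations discard the cube's spatial neighborhood structure.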
“…Since there exist correlations in practical datasets such as images and user ratings, the resulting tensor data are often low-rank. The low-rank property has been exploited in problems such as low-rank tensor completion [25,26,27,28,29,30] and low-rank tensor recovery [31,32,33,34,35]. Leveraging this property, a convex relaxation of the Tucker rank can be applied to robust tensor recovery [31,32] and tensor completion [25,26].…”
Section: Introduction (mentioning)
confidence: 99%
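The convex relaxation of the Tucker rank mentioned in the excerpt is commonly taken to be the sum of the nuclear norms of the mode-n unfoldings. A minimal sketch under that assumption (the helper names and test tensors are illustrative):

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n matricization of a tensor."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def sum_of_nuclear_norms(tensor):
    """Convex Tucker-rank surrogate: sum over modes of the nuclear norm
    (sum of singular values) of each mode-n unfolding."""
    return sum(
        np.linalg.norm(unfold(tensor, mode), ord='nuc')
        for mode in range(tensor.ndim)
    )

# For a rank-1 tensor a∘b∘c, every unfolding has exactly one nonzero
# singular value, equal to ||a||·||b||·||c||, so the surrogate is
# 3·||a||·||b||·||c|| for a 3-way tensor.
a, b, c = np.random.rand(4), np.random.rand(5), np.random.rand(6)
low_rank = np.einsum('i,j,k->ijk', a, b, c)
print(sum_of_nuclear_norms(low_rank))
```

In completion or recovery objectives this surrogate replaces the (NP-hard) Tucker rank, yielding a convex problem solvable by singular value thresholding on each unfolding.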
“…To overcome the unbalanced matricization scheme of the Tucker rank, the tensor train rank was proposed for tensor recovery and completion problems [29,33]. Some works also leverage another rank form, the tubal multi-rank, and its convex surrogate, the tensor nuclear norm, as tools for tensor-related tasks [34,35]. This paper concerns only recovery under the CP rank.…”
Section: Introduction (mentioning)
confidence: 99%
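The tensor nuclear norm the excerpt mentions comes from the t-SVD framework: take an FFT along the third (tube) mode and sum the nuclear norms of the frontal slices in the Fourier domain. A sketch of that computation; note that conventions in the literature differ on the 1/n3 scaling, and this version follows the scaled one:

```python
import numpy as np

def tensor_nuclear_norm(tensor):
    """Tensor nuclear norm (t-SVD framework): FFT along the third mode,
    then the average over frontal slices of each slice's nuclear norm.
    Some papers omit the 1/n3 factor; only the scaling differs."""
    n3 = tensor.shape[2]
    fourier = np.fft.fft(tensor, axis=2)
    return sum(
        np.linalg.norm(fourier[:, :, i], ord='nuc') for i in range(n3)
    ) / n3

# For a tensor with a single frontal slice (n3 = 1), the FFT is the
# identity, so the value reduces to the ordinary matrix nuclear norm.
x = np.random.rand(4, 5, 6)
print(tensor_nuclear_norm(x))
```

This quantity is the convex envelope (under one convention) of the tubal rank within the unit ball of the tensor spectral norm, which is why it serves as the surrogate in the tubal multi-rank methods cited above.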