2022
DOI: 10.1109/tip.2022.3176220
Self-Supervised Nonlinear Transform-Based Tensor Nuclear Norm for Multi-Dimensional Image Recovery

Cited by 46 publications (7 citation statements)
References 70 publications
“…The sampling rates are set as {0.1, 0.2, 0.3}, and the standard deviation σ of Gaussian noise in the range of [0,1] is set as 0.1. The compared methods consist of seven model-based methods (i.e., TMac-TT [2], TNN [57], TRLRF [53], FTNN [15], TCTF [58], SN2TNN [23], and HLRTF [22]), two unsupervised deep learning-based methods (i.e., DIP2D [36] and DIP3D [36]), and two diffusion-based methods (i.e., DDRM [16] and DDNM [46]). HSI Denoising.…”
Section: Datasets and Compared Methods (mentioning)
confidence: 99%
“…Besides that, the most similar works to TDNet are [47], [48], [49], but these works are mainly designed for hyperspectral image recovery tasks. In [47], the authors used deep generative networks for low-rank decomposition, and the method is robust to a wide range of distributions.…”
Section: Autoencoder Based Methods (mentioning)
confidence: 99%
“…In [47], the authors used deep generative networks for low-rank decomposition, and the method is robust to a wide range of distributions. Luo et al [48] utilized a multilayer neural network to learn the nonlinear nature of multidimensional images, which is more effective than linear transforms. Zhang et al [49] constructed Kronecker bases by performing a max-pooling operation on the input features; the degraded hyperspectral image is then recovered by a convolutional layer with a 3 × 3 spatial kernel.…”
Section: Autoencoder Based Methods (mentioning)
confidence: 99%
“…For instance, Li et al [20] proposed a composite transform that combines a linear transform with a nonlinear counterpart, applying it to LRTC. Additionally, Luo et al [27] employed nonlinear transforms derived from nonlinear multilayer neural networks to represent the low-rank tensor. However, the approaches based on t-SVD mentioned above lack flexibility in capturing correlations between different modes of tensors, thus hindering the achievement of optimal tensor completion performance.…”
Section: Introduction (mentioning)
confidence: 99%
“…This approach effectively captures correlations between different modes, enhancing the low-rank features of the transformed tensor. Besides, 3DSTNN embeds a self-supervised nonlinear transform (S2NT) [27] into TNN. S2NT comprises multiple linear transformations and nonlinear activation functions, resembling a nonlinear multilayer neural network.…”
Section: Introductionmentioning
confidence: 99%
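The excerpt above describes S2NT as alternating linear transforms and nonlinear activations applied to a tensor, with the nuclear norm then taken on the transformed tensor's frontal slices (the general transformed-TNN recipe). A minimal sketch of that pattern is below; the function names, the leaky-ReLU activation, the mode-3 `einsum` contraction, and the toy weights are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def nonlinear_transform(tensor, weights):
    """Hypothetical S2NT-style transform: alternate mode-3 linear maps
    (mixing the frontal slices with a learned matrix) and a pointwise
    nonlinearity, like a small multilayer network along one mode."""
    x = tensor
    for W in weights:
        x = np.einsum('ijk,kl->ijl', x, W)   # mode-3 product with W
        x = np.maximum(x, 0.1 * x)           # leaky-ReLU-style activation
    return x

def transformed_tnn(tensor, weights):
    """Sum of nuclear norms of the frontal slices of the transformed
    tensor -- the transform-based tensor nuclear norm surrogate."""
    t = nonlinear_transform(tensor, weights)
    return sum(np.linalg.norm(t[:, :, k], 'nuc') for k in range(t.shape[2]))

# Toy usage on a rank-1 tensor with random (untrained) transform weights.
rng = np.random.default_rng(0)
X = np.einsum('i,j,k->ijk', rng.standard_normal(8),
              rng.standard_normal(8), rng.standard_normal(4))
Ws = [rng.standard_normal((4, 4)) * 0.5, rng.standard_normal((4, 4)) * 0.5]
print(transformed_tnn(X, Ws))
```

In the self-supervised setting the excerpts describe, the transform weights would be learned jointly with the recovered tensor by minimizing this norm plus a data-fidelity term, rather than fixed in advance as here.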