2018
DOI: 10.3390/a11070094
Tensor Completion Based on Triple Tubal Nuclear Norm

Abstract: Many tasks in computer vision suffer from missing values in tensor data, i.e., multi-way data arrays. The recently proposed tensor tubal nuclear norm (TNN) has shown superiority in imputing missing values in 3D visual data, such as color images and videos. However, TNN exploits only tube (often carrying temporal/channel information) redundancy, in a circulant way, while preserving the row and column (often carrying spatial information) relationships. In this paper, a new tensor nor…
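As context for the abstract, the tubal nuclear norm it refers to is commonly defined as the sum of the nuclear norms of the frontal slices of the tensor after a DFT along the third (tube) mode. A minimal sketch of that definition, assuming the usual 1/n3 scaling convention (conventions vary across papers, and the helper name `tnn` is ours):

```python
import numpy as np

def tnn(X):
    """Tubal nuclear norm of a 3-way array X (n1 x n2 x n3):
    sum of the nuclear norms of the frontal slices of the
    Fourier-transformed tensor, scaled by 1/n3.
    (Illustrative sketch; scaling conventions differ by paper.)"""
    Xf = np.fft.fft(X, axis=2)  # DFT along the tube mode
    total = 0.0
    for k in range(X.shape[2]):
        # nuclear norm of the k-th frontal slice in the Fourier domain
        total += np.linalg.svd(Xf[:, :, k], compute_uv=False).sum()
    return total / X.shape[2]
```

For n3 = 1 the DFT is the identity and this reduces to the ordinary matrix nuclear norm, which is a quick sanity check on the definition.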


Citations: cited by 14 publications (6 citation statements)
References: 46 publications (88 reference statements)
“…(1) The orientational sensitivity of *L-SVD: Despite the promising empirical performance of the *L-SVD-based estimator, a typical defect is its orientational sensitivity: low-rankness is defined strictly along the tubal orientation, so the estimator cannot simultaneously exploit transformed low-rankness along multiple orientations [19,58]. (2) The difficulty of finding the optimal transform L(•) for *L-SVD: Although directly using fixed transforms (like the DFT and DCT) may yield fair empirical performance, it remains unclear how to find the optimal transformation L(•) for a given tensor L* when only partial and corrupted observations are available.…”
Section: Discussion
confidence: 99%
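The statement above notes that fixed transforms such as the DFT or DCT can be substituted for L(•). A hedged sketch of that idea, generalizing the TNN to an arbitrary transform along the tube mode (the helper names `transformed_tnn` and `dct_tubes` are ours, and the unscaled sum is just one of several conventions):

```python
import numpy as np
from scipy.fft import dct

def transformed_tnn(X, transform=None):
    """Sum of the nuclear norms of the frontal slices of X after
    applying an invertible transform L along the tube mode.
    The default DFT recovers (up to scaling) the usual TNN;
    an orthonormal DCT gives a DCT-based transformed norm.
    (Illustrative sketch, not any paper's exact definition.)"""
    if transform is None:
        transform = lambda T: np.fft.fft(T, axis=2)
    Xt = transform(X)
    return sum(np.linalg.svd(Xt[:, :, k], compute_uv=False).sum()
               for k in range(X.shape[2]))

# DCT-II along tubes with orthonormal scaling, as one fixed choice of L
dct_tubes = lambda T: dct(T, axis=2, norm='ortho')
```

Usage is simply `transformed_tnn(X, dct_tubes)`; the open question raised in the citation is how to choose such an L(•) well when the observations are partial and corrupted.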
“…It differs from SNN (Liu et al 2013), which only considers a low Tucker rank. As special cases, if K = 3, OITNN-O degenerates to triple TNN (Wei et al 2018); if K = 3 and (w_1, w_3) → 0, then it approximates TNN.…”
Section: Lemma 1 (It Holds For Any Tensor)
confidence: 99%
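The special cases above describe norms built as weighted sums of TNNs measured along different tensor orientations. A minimal sketch of that construction for a 3-way tensor, under our own equal-weight convention (the helper names and the exact weighting are assumptions, not the papers' definitions):

```python
import numpy as np

def tnn(X):
    # Tubal nuclear norm: sum of frontal-slice nuclear norms
    # in the Fourier domain, scaled by 1/n3 (one common convention).
    Xf = np.fft.fft(X, axis=2)
    return sum(np.linalg.svd(Xf[:, :, k], compute_uv=False).sum()
               for k in range(X.shape[2])) / X.shape[2]

def weighted_orientation_tnn(X, w):
    """Weighted sum of TNNs with the tube mode cycled through all
    three axes of a 3-way tensor.  With equal weights this mimics a
    triple-TNN-style average; letting two weights go to zero leaves
    a single-orientation TNN.  (Hypothetical helper for illustration.)"""
    perms = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]  # cycle the tube mode
    return sum(wk * tnn(np.transpose(X, p)) for wk, p in zip(w, perms))
```

The design point the citation makes is exactly this interpolation: the weights select how strongly each orientation's low-rankness is enforced.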
“…Other state-of-the-art methods, i.e., FBCP [22], TMac [7], TriTNN [2], HaLTRC [34], SiLRTC-TT [35], TNN [26], TRNNM [36], FFWTensor [14], are chosen as the baseline methods for the proposed method. The performance of these methods is obtained by tuning their parameters according to the corresponding papers.…”
Section: Analysis of Space Complexity and Time Complexity
confidence: 99%