2016
DOI: 10.1109/tnnls.2015.2496858

Generalized Higher Order Orthogonal Iteration for Tensor Learning and Decomposition

Abstract: Low-rank tensor completion (LRTC) has successfully been applied to a wide range of real-world problems. Despite the broad, successful applications, existing LRTC methods may become very slow or even not applicable for large-scale problems. To address this issue, a novel core tensor trace-norm minimization (CTNM) method is proposed for simultaneous tensor learning and decomposition, and has a much lower computational complexity. In our solution, first, the equivalence relation of trace norm of a low-rank tensor…
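For background (this is not the paper's own notation): a widely used tensor trace norm in the LRTC literature is the weighted sum of matrix nuclear norms of the tensor's mode unfoldings. A minimal NumPy sketch, with `unfold` and the uniform weights chosen for illustration:

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move axis `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tensor_trace_norm(T, weights=None):
    # Overlapped tensor trace norm: weighted sum of the nuclear norms
    # of each mode unfolding (weights default to 1/N for an N-way tensor).
    N = T.ndim
    if weights is None:
        weights = [1.0 / N] * N
    return sum(w * np.linalg.norm(unfold(T, n), "nuc")
               for n, w in enumerate(weights))

# A rank-1 tensor: every unfolding has rank 1, so each nuclear norm
# equals the single singular value sqrt(3 * 4 * 5).
a, b, c = np.ones(3), np.ones(4), np.ones(5)
T = np.einsum("i,j,k->ijk", a, b, c)
print(tensor_trace_norm(T))  # ≈ sqrt(60) ≈ 7.746
```

Minimizing this norm subject to the observed entries is the standard convex formulation of LRTC; the paper's CTNM method instead works on a small core tensor to cut the cost of these full-size SVDs.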

Cited by 88 publications (72 citation statements) · References 43 publications
“…Another line of research focuses on the convex relaxation of the low-rank tensor decomposition and completion problem. For example, the convex relaxation is achieved by a generalized trace norm (Romera‐Paredes and Pontil, ), a tensor Schatten 1‐norm (Liu et al., ; Gu et al., ) or a novel tensor nuclear norm (Yuan and Zhang, ). Kolda and Bader (), Plantenga et al.…”
Section: Introduction
confidence: 97%
“…Since then, HOSVD and HOOI have been widely studied in the literature (see, e.g. [30,31,32,33,34]). However, as far as we know, many basic theoretical properties of these procedures, such as the error bound and the necessary number of iterations, still remain unclear.…”
Section: Introduction
confidence: 99%
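For context on the procedures this statement cites: HOSVD takes each factor matrix from an SVD of the corresponding mode unfolding (HOOI then refines those factors by alternating iteration). A minimal NumPy sketch of truncated HOSVD, with `hosvd` and the shapes chosen for illustration:

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move axis `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    # Multiply tensor T by matrix M along axis `mode`.
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hosvd(T, ranks):
    # Truncated HOSVD: U_n = top-r_n left singular vectors of the
    # mode-n unfolding; core G = T x_1 U_1^T x_2 U_2^T ... x_N U_N^T.
    Us = []
    for n, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, n), full_matrices=False)
        Us.append(U[:, :r])
    G = T
    for n, U in enumerate(Us):
        G = mode_multiply(G, U.T, n)
    return G, Us

# Build a tensor with exact multilinear rank (2, 2, 2); HOSVD recovers
# it exactly at the true rank.
rng = np.random.default_rng(0)
G0 = rng.standard_normal((2, 2, 2))
U0 = [np.linalg.qr(rng.standard_normal((d, 2)))[0] for d in (5, 6, 7)]
T = G0
for n, U in enumerate(U0):
    T = mode_multiply(T, U, n)

G, Us = hosvd(T, (2, 2, 2))
R = G
for n, U in enumerate(Us):
    R = mode_multiply(R, U, n)
print(np.allclose(R, T))  # exact recovery at the true multilinear rank
```

HOOI would repeat the SVD step mode by mode, each time multiplying the other modes' current factors into the tensor first, which is what makes its convergence behavior harder to analyze than the one-shot HOSVD above.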
“…Schatten norm, or trace norm), which is defined as the sum of singular values of a matrix and is the most popular convex surrogate for rank regularization. Based on different definitions of tensor rank, various nuclear norm regularized algorithms have been proposed (Liu et al. 2013; Imaizumi, Maehara, and Hayashi 2017; Liu et al. 2014; Liu et al. 2015). Rank minimization based methods do not need to specify the rank of the employed tensor decompositions beforehand; the rank of the recovered tensor is automatically learned from the limited observations.…”
Section: Introduction
confidence: 99%
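The definition this statement uses is easy to verify numerically: the nuclear norm of a matrix is the sum of its singular values, so for a matrix built from orthogonal rank-1 terms it is just the sum of the terms' magnitudes. A minimal NumPy sketch:

```python
import numpy as np

def nuclear_norm(A):
    # Nuclear (trace / Schatten-1) norm: sum of singular values.
    # It is the standard convex surrogate for matrix rank.
    return np.linalg.svd(A, compute_uv=False).sum()

# Rank-2 matrix from two orthogonal rank-1 terms with magnitudes 1 and 2:
# its singular values are exactly 2 and 1.
u1, v1 = np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0])
u2, v2 = np.array([0.0, 1.0, 0.0]), np.array([0.0, 2.0])
A = np.outer(u1, v1) + np.outer(u2, v2)
print(nuclear_norm(A))  # singular values are 2 and 1, so ≈ 3.0
```

NumPy also exposes this directly as `np.linalg.norm(A, "nuc")`; the explicit SVD form above is shown only to match the definition quoted in the statement.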