2016
DOI: 10.1007/s10589-016-9846-9

An adaptive gradient method for computing generalized tensor eigenpairs

Abstract: High-order tensors arise increasingly often in signal processing, data analysis, higher-order statistics, and imaging sciences. In this paper, an adaptive gradient (AG) method is presented for computing generalized tensor eigenpairs. Global convergence and a linear convergence rate are established under suitable conditions. Numerical results are reported to illustrate the efficiency of the proposed method compared with the GEAP method, an adaptive shifted power method proposed by Tamara G. Kolda and Jackson…
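To make the abstract's setting concrete: for a symmetric order-m tensor A, a Z-eigenpair (λ, x) satisfies A x^{m-1} = λx with ‖x‖ = 1, i.e. x is a constrained stationary point of f(x) = Ax^m on the unit sphere. The sketch below is a minimal projected-gradient ascent with a simple backtracking (adaptive) step for the order-3 case; it only illustrates the gradient idea and is not the paper's AG method, whose step-size rule and convergence analysis are more involved. The function name and the rank-1 test tensor are illustrative assumptions.

```python
import numpy as np

def z_eig_gradient(A, x0, step=1.0, shrink=0.5, tol=1e-10, max_iter=500):
    """Projected gradient ascent with backtracking for a Z-eigenpair of a
    symmetric order-3 tensor A: maximize f(x) = A x^3 subject to ||x|| = 1.
    Illustrative sketch only, not the AG method from the paper."""
    f = lambda x: np.einsum('ijk,i,j,k->', A, x, x, x)       # f(x) = A x^3
    g = lambda x: 3.0 * np.einsum('ijk,j,k->i', A, x, x)     # grad f = 3 A x^2
    x = x0 / np.linalg.norm(x0)
    for _ in range(max_iter):
        grad = g(x)
        # project the gradient onto the tangent space of the unit sphere
        pg = grad - (grad @ x) * x
        if np.linalg.norm(pg) < tol:
            break
        t, fx = step, f(x)
        # backtracking line search: shrink the step until f strictly increases
        while t > 1e-16:
            x_new = x + t * pg
            x_new /= np.linalg.norm(x_new)
            if f(x_new) > fx:
                break
            t *= shrink
        x = x_new
    # at a constrained stationary point, lambda = f(x) = A x^3
    return f(x), x

# illustrative test: for the rank-1 tensor A = a (X) a (X) a, the dominant
# Z-eigenpair is lambda = ||a||^3 with x = a / ||a||
a = np.array([1.0, 2.0, 2.0])
A = np.einsum('i,j,k->ijk', a, a, a)
lam, x = z_eig_gradient(A, np.ones(3))
```

On the rank-1 example above, the maximizer is x = a/‖a‖ with λ = ‖a‖³ = 27, which the iteration recovers from a generic positive start.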

Cited by 23 publications (12 citation statements)
References 53 publications
“…Some of them aim to find a single Z-eigenpair. Kolda and Mayo [13] presented a shifted power method with a linear convergence rate for computing Z-eigenpairs of real symmetric tensors, and later extended it to generalized tensor eigenvalue problems in [14]; Yu et al [24] proposed a linearly convergent adaptive gradient method; Guo et al [6] developed a modified local Newton iteration for computing nonnegative Z-eigenpairs of nonnegative tensors, which enjoys locally quadratic convergence under suitable assumptions; Zhang et al [25] introduced a Newton method for the newly defined almost nonnegative irreducible tensors. Other works focus on finding the extreme Z-eigenvalues.…”
Section: Introduction
confidence: 99%
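The shifted power iteration mentioned in the excerpt (Kolda and Mayo's SS-HOPM) can be sketched for a symmetric order-3 tensor as follows. The shift α makes each step a monotone ascent of f(x) = Ax³ on the unit sphere; the shift constant used here is a crude heuristic, not the sharp bound from their paper, and the rank-1 test tensor is an illustrative assumption.

```python
import numpy as np

def ss_hopm(A, x0, alpha=None, tol=1e-12, max_iter=1000):
    """Sketch of a shifted symmetric higher-order power iteration (in the
    spirit of Kolda & Mayo's SS-HOPM) for a symmetric order-3 tensor:
        x <- normalize(A x^2 + alpha * x).
    The default shift is a crude heuristic, not the paper's constant."""
    n = A.shape[0]
    if alpha is None:
        alpha = 2.0 * np.abs(A).max() * n  # heuristic convexity shift
    x = x0 / np.linalg.norm(x0)
    for _ in range(max_iter):
        y = np.einsum('ijk,j,k->i', A, x, x) + alpha * x
        x_new = y / np.linalg.norm(y)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    # Rayleigh-quotient-style eigenvalue estimate: lambda = A x^3
    lam = x @ np.einsum('ijk,j,k->i', A, x, x)
    return lam, x

# illustrative test on a rank-1 tensor: expected Z-eigenpair is
# lambda = ||a||^3, x = a / ||a||
a = np.array([1.0, 2.0, 2.0])
A = np.einsum('i,j,k->ijk', a, a, a)
lam, x = ss_hopm(A, np.ones(3))
```

A larger shift guarantees monotonicity but slows the (linear) convergence, which is one motivation for adaptive choices of the shift or step.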
“…Many studies have been conducted on properties of tensors such as tensor eigenvalues [4,5,6], the best rank-one approximation [7,8,9], tensor rank [3,10], tensor and symmetric tensor decomposition [11,12,13], symmetric tensors [14,15], nonnegative tensors [16], copositive tensors [17], and completely positive tensors [18,19,20]. Many tensor computation methods have also been proposed, including tensor eigenvalue computation [21,22,23,24,25,26], solution of tensor systems [27,28,29,30,31,32,33], and tensor decomposition [34].…”
Section: Introduction
confidence: 99%
“…Chen et al [13] studied the generalized tensor eigenvalue problem via homotopy methods. An adaptive gradient method for computing generalized tensor eigenpairs was developed in [14]. Fu et al [15] derived new algorithms to compute the best rank-one approximation of conjugate partial-symmetric (CPS) tensors by unfolding CPS tensors to Hermitian matrices.…”
Section: Introduction
confidence: 99%