Abstract. Recently, the problem of determining the best, in the least-squares sense, rank-1 approximation to a higher-order tensor was studied, and an iterative method that extends the well-known power method for matrices was proposed for its solution. This higher-order power method was also proposed, unchanged, for the special but important class of supersymmetric tensors. A simplified version, adapted to the special structure of the supersymmetric problem, was deemed unreliable, as its convergence is not guaranteed. The aim of this paper is to show that a symmetric version of the above method converges under assumptions of convexity (or concavity) for the functional induced by the tensor in question, assumptions that are very often satisfied in practical applications. The use of this version entails significant savings in computational complexity as compared to the unconstrained higher-order power method. Furthermore, a novel method for initializing the iterative process is developed, which has been observed to yield an estimate lying closer to the global optimum than the previously suggested initialization. Moreover, its proximity to the global optimum is a priori quantifiable. In the course of the analysis, some important properties that the supersymmetry of a tensor implies for its square matrix unfolding are also studied.

Key words. supersymmetric tensors, rank-1 approximation, higher-order power method, higher-order singular value decomposition
AMS subject classifications. 15A18, 15A57, 15A69

PII. S0895479801387413
Introduction. A tensor of order N is an N-way array, i.e., its entries are accessed via N indices. For example, a scalar is a tensor of order 0, a vector is a tensor of order 1, and a matrix is a second-order tensor. Tensors find applications in such diverse fields as physics, signal processing, data analysis, chemometrics, and psychology [4].

The notion of rank can also be defined for tensors of order higher than 2. This is done via an extension of the well-known expansion of a matrix into a sum of rank-1 terms. Thus, the rank, R, of an Nth-order tensor T is the minimum number of rank-1 tensors that sum up to T. A rank-1 tensor of order N is given by the generalized outer product of N vectors, u^{(i)}, i = 1, 2, ..., N, i.e., its (i_1, i_2, ..., i_N) entry
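As a concrete illustration (not part of the paper), the entrywise definition of a rank-1 tensor can be sketched in NumPy: the generalized outer product of N vectors yields an order-N array whose (i_1, ..., i_N) entry is the product of the corresponding vector entries. The vectors and their lengths below are illustrative choices.

```python
import numpy as np

def rank1_tensor(vectors):
    """Build the order-N rank-1 tensor as the generalized outer
    product of a list of N one-dimensional arrays."""
    tensor = vectors[0]
    for v in vectors[1:]:
        # np.multiply.outer appends one mode per vector
        tensor = np.multiply.outer(tensor, v)
    return tensor

# Three illustrative vectors give an order-3 rank-1 tensor.
u1 = np.array([1.0, 2.0])
u2 = np.array([3.0, 4.0, 5.0])
u3 = np.array([6.0, 7.0])

T = rank1_tensor([u1, u2, u3])
print(T.shape)     # (2, 3, 2)
# Entry (1, 2, 0) equals u1[1] * u2[2] * u3[0] = 2 * 5 * 6 = 60
print(T[1, 2, 0])  # 60.0
```

Such a tensor has rank 1 by construction; a general tensor's rank is the smallest number of these terms needed to reproduce it exactly.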