Higher-order tensors and their decompositions are abundantly present in domains such as signal processing (e.g., higher-order statistics [1] and sensor array processing [2]), scientific computing (e.g., discretized multivariate functions [3]-[6]) and quantum information theory (e.g., the representation of quantum many-body states [7]). In many applications the tensors, which may be huge, can be approximated well by compact multilinear models or decompositions. Tensor decompositions are more versatile tools than the linear models resulting from traditional matrix approaches.

Compared to matrices, tensors have at least one extra dimension. The number of elements in a tensor increases exponentially with the number of dimensions, and so do the computational and memory requirements. This exponential dependency, and the problems it causes, is called the curse of dimensionality. The curse limits the order of the tensors that can be handled. Even for modest orders, tensor problems are often large-scale. Large tensors can be handled, and the curse can be alleviated or even removed, by using a decomposition that represents the tensor instead of the tensor itself. However, most decomposition algorithms require full tensors, which renders these algorithms infeasible for large datasets. If a tensor can be represented by a decomposition, this hypothesized structure can be exploited by compressed-sensing-type methods that work on incomplete tensors, i.e., tensors in which only a few elements are known.

In domains such as scientific computing and quantum information theory, tensor decompositions such as the Tucker decomposition and tensor trains have been applied successfully to represent large tensors. In the latter domain, the tensor can contain more elements than the number of atoms in the universe [8].
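As a rough, purely illustrative sketch of this exponential growth, the short Python snippet below tabulates the number of elements and the dense storage cost of an Nth-order tensor whose dimensions all have size 10; the dimension size and the 8-byte double-precision entries are assumptions made for the example, not figures from the text.

```python
# Back-of-the-envelope illustration of the curse of dimensionality:
# storage for a dense tensor with all dimensions of size I grows as I**N.
# I = 10 and 8-byte double-precision entries are illustrative assumptions.

I = 10          # assumed size of each dimension
BYTES = 8       # assumed double-precision storage per element

for N in (3, 6, 9, 12, 15):          # tensor order
    elements = I ** N                # element count grows exponentially with N
    print(f"order {N:2d}: {elements:.1e} elements, "
          f"{elements * BYTES / 1e9:.1e} GB dense storage")
```

Already at order 15 such a tensor would require millions of gigabytes of dense storage, which is why representing the tensor by a decomposition, rather than storing it explicitly, becomes essential.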