Big data analysis has become a crucial part of emerging technologies such as the Internet of Things, cyber-physical systems, deep learning, and anomaly detection. Among many other techniques, dimensionality reduction plays a key role in such analyses and facilitates feature selection and feature extraction. Randomized algorithms are efficient tools for handling big data tensors: they accelerate the decomposition of large-scale data tensors by reducing both the computational complexity of deterministic algorithms and the communication between different levels of the memory hierarchy, which is the main bottleneck in modern computing environments and architectures. In this paper, we review recent advances in randomization for the computation of the Tucker decomposition and the Higher Order SVD (HOSVD). We discuss random projection and sampling approaches, single-pass and multi-pass randomized algorithms, and how to utilize them in the computation of the Tucker decomposition and the HOSVD. Simulations on synthetic and real datasets are provided to compare the performance of some of the best and most promising algorithms.
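To make the projection idea concrete, below is a minimal sketch of a randomized HOSVD in Python/NumPy: each mode unfolding is compressed with a Gaussian random matrix, the result is orthonormalized, and the core tensor is obtained by projection. The function names (`randomized_hosvd`, `unfold`, `mode_product`), the oversampling parameter, and the test sizes are illustrative assumptions, not an implementation from any of the surveyed papers.

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding: move `mode` to the front, then flatten."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def mode_product(X, U, mode):
    """Mode-n product X x_n U (rows of U index the new dimension)."""
    return np.moveaxis(np.tensordot(U, X, axes=(1, mode)), 0, mode)

def randomized_hosvd(X, ranks, oversample=5, seed=0):
    """Randomized HOSVD sketch: compress each mode unfolding with a
    Gaussian matrix, orthonormalize, then project to form the core."""
    rng = np.random.default_rng(seed)
    factors = []
    for mode, r in enumerate(ranks):
        Xn = unfold(X, mode)
        Omega = rng.standard_normal((Xn.shape[1], r + oversample))
        Q, _ = np.linalg.qr(Xn @ Omega)   # basis for the sketched range
        factors.append(Q[:, :r])
    G = X
    for mode, U in enumerate(factors):
        G = mode_product(G, U.T, mode)    # core: project onto each basis
    return G, factors

# Usage: a tensor with exact multilinear rank (5, 5, 5) is recovered
# almost perfectly from the sketches.
rng = np.random.default_rng(1)
X = rng.standard_normal((5, 5, 5))
for mode, dim in enumerate((30, 40, 50)):
    X = mode_product(X, rng.standard_normal((dim, 5)), mode)
G, Us = randomized_hosvd(X, ranks=(5, 5, 5))
X_hat = G
for mode, U in enumerate(Us):
    X_hat = mode_product(X_hat, U, mode)
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```

A modest oversampling of the target ranks is the usual safeguard: it lets the sketched bases capture the dominant mode subspaces with high probability, which is what makes a single pass over the data sufficient for compressible tensors.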
This work develops two fast randomized algorithms for computing the generalized tensor singular value decomposition (GTSVD) based on the tubal product (t-product). Random projection is used to capture the important actions of the underlying data tensors, producing small sketches of the original data tensors that are easier to handle. Owing to the small size of the sketch tensors, deterministic approaches can be applied to compute their GTSVDs; the GTSVD of the original large-scale data tensors is then recovered from the GTSVD of the small sketches. Experiments are conducted to show the effectiveness of the proposed approaches.
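As a rough illustration of the sketching stage under the t-product, the snippet below forms a small Gaussian sketch of a third-order tensor, orthonormalizes the frontal slices of the sketch in the Fourier domain, and projects the original tensor onto the sketched range, slice by slice. This covers only the random-projection step, not the full GTSVD computation, and the names `t_product` and `t_randomized_approx` are assumptions made for this sketch.

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x n2 x n3) and B (n2 x m x n3): FFT along the
    third mode, facewise matrix products, then inverse FFT."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)
    return np.fft.ifft(Ch, axis=2).real

def t_randomized_approx(A, r, oversample=5, seed=0):
    """Randomized low-tubal-rank approximation: sketch Y = A * Omega with
    a Gaussian tensor Omega, orthonormalize each Fourier-domain slice of
    Y, and project the slices of A onto the sketched range."""
    n1, n2, n3 = A.shape
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n2, r + oversample, n3))
    Yh = np.fft.fft(t_product(A, Omega), axis=2)   # small sketch of A
    Ah = np.fft.fft(A, axis=2)
    Ph = np.empty_like(Ah)
    for k in range(n3):
        Qk, _ = np.linalg.qr(Yh[:, :, k])          # range of slice k
        Ph[:, :, k] = Qk @ (Qk.conj().T @ Ah[:, :, k])
    return np.fft.ifft(Ph, axis=2).real

# Usage: a tensor of tubal rank 5 should be recovered almost exactly.
rng = np.random.default_rng(1)
A = t_product(rng.standard_normal((60, 5, 8)),
              rng.standard_normal((5, 40, 8)))
A_hat = t_randomized_approx(A, r=5)
print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))
```

Working slice by slice in the Fourier domain is what makes the t-product tractable: after an FFT along the third mode, every tubal operation reduces to ordinary matrix algebra on the frontal slices.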
The Canonical Polyadic Decomposition (CPD) is a convenient and intuitive tool for tensor factorization; however, for higher-order tensors it often incurs high computational cost and extensive permutation of tensor entries, and these undesirable effects grow exponentially with the tensor order. Prior compression of the tensor at hand can reduce the computational cost of the CPD, but this is applicable only when the rank R of the decomposition does not exceed the tensor dimensions. To resolve these issues, we present a novel method for the CPD of higher-order tensors which rests upon a simple tensor network of representative interconnected core tensors of orders no higher than 3. For rigour, we develop an exact conversion scheme from the core tensors to the factor matrices in the CPD, and an iterative algorithm of low complexity to estimate these factor matrices in the inexact case. Comprehensive simulations over a variety of scenarios support the approach.
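For context, the conventional baseline that such tensor-network methods aim to accelerate can be sketched as plain alternating least squares (ALS) for a rank-R CPD of a third-order tensor. This is the standard textbook scheme, not the interconnected-core algorithm proposed here; the helper names (`cp_als`, `khatri_rao`) are illustrative.

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product, matching the unfolding above."""
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def cp_als(X, R, n_iter=500, seed=0):
    """Plain ALS for a rank-R CPD of a third-order tensor X."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((d, R)) for d in X.shape)
    for _ in range(n_iter):
        # Each factor solves a linear least-squares problem with the
        # other two factors held fixed.
        A = unfold(X, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(X, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(X, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Usage: recover the factors of an exactly rank-3 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 3)) for d in (20, 25, 30))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, R=3)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```

Each factor update is a least-squares solve against the Khatri-Rao product of the other factors, precisely the step whose cost and data-permutation overhead grow rapidly with tensor order and motivate the core-tensor reformulation above.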
We extend ordinary Padé approximation, which is based on the set of standard polynomials {1, x, …, x^n}, to a new extended Padé approximation (Müntz Padé approximation) based on the general basis set {1, x^λ, x^{2λ}, …, x^{nλ}} (0 < λ ≤ 1), a particular case of Müntz polynomials, using a general Taylor series (based on fractional calculus) with convergent error. The importance of this extension is that ordinary Padé approximation is a particular case of the extended one. The parameterization of the new extended Padé approximation (λ is the corresponding parameter) is also an important subject: obtaining the optimal value of this parameter is a good question for future research.
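For the ordinary case, a Padé approximant can be computed directly from Taylor coefficients; the snippet below uses SciPy's `pade` on exp(x) and indicates where the Müntz exponent λ would enter. The value lam = 0.5 and the substitution-style evaluation are illustrative assumptions, not the paper's fractional-calculus construction.

```python
import numpy as np
from math import factorial
from scipy.interpolate import pade

# Taylor coefficients of exp(x) up to degree 4 give a [2/2] approximant.
coeffs = [1 / factorial(k) for k in range(5)]
p, q = pade(coeffs, 2)              # numerator and denominator (np.poly1d)

x = np.linspace(0.1, 1.0, 5)
print(p(x) / q(x))                  # [2/2] Pade values
print(np.exp(x))                    # reference values

# In the Muntz-Pade setting the monomial basis {1, x, x^2, ...} is replaced
# by {1, x^lam, x^(2*lam), ...} for some 0 < lam <= 1 (lam = 0.5 here is an
# illustrative choice), so the rational form is evaluated in x**lam, with
# coefficients taken from the generalized (fractional) Taylor series.
lam = 0.5
print(p(x**lam) / q(x**lam))        # shape of a Muntz-Pade evaluation
```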