This survey gives a comprehensive overview of tensor techniques and their applications in machine learning. Tensors generalize vectors and matrices to higher orders and can capture higher-order statistics. Many modern machine learning applications require large amounts of structured, high-dimensional input data, and the complexity of vector-based algorithms can grow exponentially with the vector size. Representing the data as tensors rather than flattened vectors can effectively mitigate these high-dimensional problems. For readers interested in learning about tensors, this survey introduces the fundamentals, including tensor operations, tensor decomposition, several tensor-based algorithms, and applications of tensors in machine learning and deep learning. Tensor decomposition is highlighted because it effectively extracts the structural features of data and underlies many of the algorithms and applications discussed. The paper is organized as follows. Part one introduces basic tensor operations, including tensor decomposition. Part two describes in detail applications of tensors in machine learning and deep learning, including regression, supervised classification, data preprocessing, and unsupervised classification based on low-rank tensor approximation algorithms. Finally, we briefly discuss pressing challenges, opportunities, and prospects for tensor methods.
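To make the CP form of tensor decomposition mentioned above concrete, the sketch below (a minimal illustration, not code from the survey) rebuilds a tensor from its CP factor matrices: a rank-R CP decomposition expresses an N-way tensor as a sum of R outer products, one column from each factor matrix per rank-1 component. The function name `cp_reconstruct` and the random test tensor are our own choices for illustration.

```python
import numpy as np

def cp_reconstruct(factors):
    """Rebuild a tensor from its CP factor matrices.

    factors: list of arrays A_n, each of shape (I_n, R).  The tensor is
    the sum over r of the outer product of the r-th columns of all A_n.
    """
    rank = factors[0].shape[1]
    shape = tuple(f.shape[0] for f in factors)
    tensor = np.zeros(shape)
    for r in range(rank):
        # Build one rank-1 component by chaining outer products.
        component = factors[0][:, r]
        for f in factors[1:]:
            component = np.multiply.outer(component, f[:, r])
        tensor += component
    return tensor

# Illustrative rank-2 example: random factors for a 3x4x5 tensor.
rng = np.random.default_rng(0)
factors = [rng.standard_normal((dim, 2)) for dim in (3, 4, 5)]
T = cp_reconstruct(factors)
print(T.shape)  # (3, 4, 5)
```

In practice the factor matrices are found by a fitting algorithm such as alternating least squares; the point here is only the structure of the decomposition itself.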
Deep neural networks (DNNs) show powerful performance in image classification and many other applications. However, as the number of network layers increases, they place enormous pressure on devices with limited resources. In this article, a novel network compression algorithm is proposed that compresses the original network by up to about 60 times. In particular, a tensor Canonical Polyadic (CP) decomposition based algorithm is proposed to compress the weight matrices in the fully connected (FC) layers and the convolution kernels in the convolutional layers. Traditional tensor decomposition approaches usually first pre-train the weights, then decompose them, and finally fine-tune the resulting factors in a second training phase. Instead, the proposed method updates the decomposed factors directly by performing tensor CP decomposition on the weights, without fine-tuning; it is called the Fast CP-Compression Layer method in this paper. Experiments show that the proposed algorithm not only reduces computing time and improves the compression factor but also improves accuracy on some datasets.
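The compression factor from storing CP factors instead of a dense weight matrix can be sketched with a simple parameter count. The layer sizes, tensor reshaping, and rank below are illustrative assumptions of ours, not values from the article: a dense I x J matrix reshaped into an N-way tensor of shape (I_1, ..., I_N) costs only R * (I_1 + ... + I_N) parameters after a rank-R CP decomposition.

```python
# Hypothetical example: a 4096 x 4096 FC weight matrix reshaped into a
# 4-way tensor of shape (64, 64, 64, 64) before CP decomposition.
# The rank below is an illustrative choice, not the paper's setting.
dense_params = 4096 * 4096            # parameters in the dense matrix
tensor_shape = (64, 64, 64, 64)
rank = 1024
cp_params = rank * sum(tensor_shape)  # one (I_n x R) factor per mode

print(dense_params // cp_params)  # compression factor: 64
```

Under these assumed sizes the factored form needs roughly 64 times fewer parameters, the same order as the roughly 60x compression the abstract reports; the achievable rank (and hence the factor) depends on how much approximation error the network tolerates.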