Here we introduce the new concept of computation coding. Similar to how rate-distortion theory is concerned with the lossy compression of data, computation coding deals with the lossy computation of functions. Particularizing to linear functions, we present an algorithmic approach to reduce the computational cost of multiplying a constant matrix with a variable vector, requiring neither the matrix nor the vector to have any particular structure or statistical properties. The algorithm decomposes the constant matrix into the product of codebook and wiring matrices whose entries are either zero or signed integer powers of two. For a typical application like the implementation of a deep neural network, the proposed algorithm reduces the number of required addition units several times. To achieve the accuracy of 16-bit signed integer arithmetic for 4k-vectors, no multipliers and only 1.5 adders per matrix entry are needed.
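The key enabler is that multiplying by a signed integer power of two costs only a bit shift, so a matrix whose entries lie in {0, ±2^k} can be applied with additions alone. A minimal sketch of this idea (illustrative only; the function name and the shift emulation are assumptions, not the paper's algorithm) is:

```python
import numpy as np

def shift_add_matvec(W, x):
    """Compute W @ x using only additions and bit shifts.

    W : 2-D array whose entries are 0 or signed integer powers of two
    x : 1-D integer array
    """
    y = np.zeros(W.shape[0], dtype=np.int64)
    for i in range(W.shape[0]):
        for j, w in enumerate(W[i]):
            if w == 0:
                continue
            k = int(abs(w)).bit_length() - 1   # exponent of the power of two
            term = int(x[j]) << k              # left shift replaces the multiply
            y[i] += term if w > 0 else -term
    return y

W = np.array([[2, 0, -4],
              [1, 8, 0]])          # entries restricted to {0, ±2^k}
x = np.array([3, 1, 2])
print(shift_add_matvec(W, x))      # matches W @ x -> [-2 11]
```

In the decomposition described above, an arbitrary constant matrix is approximated by a product of such matrices, so the full matrix-vector product becomes a cascade of shift-and-add stages with no multipliers.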
Most detection algorithms in spatial modulation (SM) are formulated as linear regression via the regularized least-squares (RLS) method. In this method, the transmit signal is estimated by minimizing the residual sum of squares penalized with some regularization. This paper studies the asymptotic performance of a generic RLS-based detection algorithm employed for recovery of SM signals. We derive analytically the asymptotic average mean squared error and the error rate for the class of bi-unitarily invariant channel matrices. The analytic results are employed to study the performance of SM detection via the box-LASSO. The analysis demonstrates that the performance characterization for i.i.d. Gaussian channel matrices remains valid for matrices with non-Gaussian entries as well. This justifies the partially proven conjecture given in [1]. The derivations further extend the former studies to scenarios with non-i.i.d. channel matrices. Numerical investigations validate the analysis, even for practical system dimensions.
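To make the RLS formulation concrete, the box-LASSO estimates the transmit vector by minimizing a least-squares term plus an l1 penalty, subject to a box constraint on the entries. The sketch below (a hedged illustration, not the paper's solver; the function name, step size, and parameter values are assumptions) solves this via proximal gradient descent:

```python
import numpy as np

def box_lasso(H, y, lam=0.1, lo=-1.0, hi=1.0, n_iter=500):
    """Solve min_x 0.5*||y - H x||^2 + lam*||x||_1 s.t. lo <= x <= hi
    by proximal gradient descent (soft-thresholding plus box clipping)."""
    x = np.zeros(H.shape[1])
    step = 1.0 / np.linalg.norm(H, 2) ** 2        # 1 / Lipschitz constant of the LS gradient
    for _ in range(n_iter):
        grad = H.T @ (H @ x - y)                  # gradient of the least-squares term
        z = x - step * grad
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold (l1 prox)
        x = np.clip(z, lo, hi)                    # project onto the box
    return x

# Toy recovery of a sparse, box-bounded signal through a Gaussian channel
rng = np.random.default_rng(0)
H = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.0, -1.0]                      # sparse SM-like transmit vector
y = H @ x_true + 0.05 * rng.standard_normal(40)
print(np.round(box_lasso(H, y), 2))
```

The box constraint reflects the bounded modulation alphabet, while the l1 penalty exploits the sparsity of SM transmit vectors; the asymptotic analysis above characterizes exactly how this estimator's MSE and error rate behave for bi-unitarily invariant channels.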