This paper has two contributions. First, we propose a novel coded matrix multiplication technique called Generalized PolyDot codes that advances on existing methods for coded matrix multiplication under storage and communication constraints. This technique uses "garbage alignment," i.e., aligning computations in coded computing that are not part of the desired output. Generalized PolyDot codes bridge between the recent Polynomial codes and MatDot codes, trading off between recovery threshold and communication costs. Second, we demonstrate that Generalized PolyDot coding can be used for training large Deep Neural Networks (DNNs) on unreliable nodes that are prone to soft-errors, e.g., bit flips during computation that produce erroneous outputs. This requires us to address three additional challenges: (i) the prohibitively large overhead of coding the weight matrices in each layer of the DNN at each iteration; (ii) nonlinear operations during training, which are incompatible with linear coding; and (iii) the absence of an error-free master node, which requires us to architect a fully decentralized implementation. Because our strategy is completely decentralized, i.e., it makes no assumption that a single, error-free master node is present, it avoids any "single point of failure." We also allow all primary DNN training steps, namely matrix multiplication, nonlinear activation, Hadamard product, and update steps, as well as the encoding and decoding, to be error-prone. We consider both mini-batch size B = 1 and B > 1: the former leverages coded matrix-vector products and the latter coded matrix-matrix products. The problem of DNN training under soft-errors also motivates an interesting probabilistic error model, under which a real-number (P, Q) MDS code is shown to correct P − Q − 1 errors with probability 1, as compared to ⌊(P − Q)/2⌋ under the more conventional adversarial error model.
We also demonstrate that our proposed coded DNN strategy can provide unbounded gains in error tolerance over a competing replication strategy and a preliminary MDS-code-based strategy [2] under both of these error models. Lastly, as an example, we demonstrate an extension of our technique to a specific neural network architecture, namely sparse autoencoders.
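To make the polynomial-coding idea concrete, the following is a minimal sketch of MatDot-style coded matrix multiplication, one endpoint of the Polynomial-code/MatDot trade-off that Generalized PolyDot codes bridge. The function names, parameters, and evaluation points are illustrative and not the paper's notation: A is split into p column-blocks and B into p row-blocks, each worker evaluates the two encoding polynomials at one point, and the product AB is recovered as the coefficient of x^(p-1) from any 2p − 1 worker results (the recovery threshold).

```python
import numpy as np

def matdot_multiply(A, B, p, eval_points):
    """Recover A @ B from 2p-1 polynomial evaluations (recovery threshold 2p-1).

    Illustrative sketch: split A into p column-blocks A_i and B into p
    row-blocks B_j, so A @ B = sum_i A_i @ B_i. Worker k evaluates
        pA(x) = sum_i A_i x^i    and    pB(x) = sum_j B_j x^(p-1-j)
    at x = eval_points[k]; the product A @ B is the coefficient of
    x^(p-1) in the matrix polynomial pA(x) @ pB(x).
    """
    m, n = A.shape
    assert n % p == 0 and B.shape[0] == n
    A_blocks = np.split(A, p, axis=1)   # each A_i is m x (n/p)
    B_blocks = np.split(B, p, axis=0)   # each B_j is (n/p) x B.shape[1]

    # Each (possibly unreliable) worker returns one evaluation of the
    # product polynomial; any 2p-1 evaluations suffice to decode.
    results = []
    for x in eval_points:
        pA = sum(Ai * (x ** i) for i, Ai in enumerate(A_blocks))
        pB = sum(Bj * (x ** (p - 1 - j)) for j, Bj in enumerate(B_blocks))
        results.append(pA @ pB)

    # Decode: interpolate the degree-(2p-2) matrix polynomial entry-wise
    # by solving a Vandermonde system, then read off the x^(p-1) coefficient.
    k = 2 * p - 1
    xs = np.asarray(eval_points[:k], dtype=float)
    V = np.vander(xs, k, increasing=True)              # rows [1, x, ..., x^(2p-2)]
    stacked = np.stack([r.ravel() for r in results[:k]])
    coeffs = np.linalg.solve(V, stacked)               # polynomial coefficients
    return coeffs[p - 1].reshape(m, B.shape[1])
```

For p = 2, for example, pA(x) pB(x) = A_0 B_1 + (A_0 B_0 + A_1 B_1) x + A_1 B_0 x^2, so the linear coefficient is exactly AB while the other two terms are the "garbage" that must be aligned away; splitting along the inner dimension is what gives MatDot codes their low recovery threshold at the cost of larger per-worker output (and hence communication), the trade-off that Generalized PolyDot codes interpolate.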