Federated learning enables training a global model from data located at client nodes, without sharing or moving client data to a centralized server. The performance of federated learning in a multi-access edge computing (MEC) network suffers from slow convergence due to heterogeneity and stochastic fluctuations in compute power and communication link quality across clients. We propose a novel coded computing framework, CodedFedL, that injects structured coding redundancy into federated learning to mitigate stragglers and speed up the training procedure. CodedFedL enables coded computing for non-linear federated learning by efficiently exploiting a distributed kernel embedding via random Fourier features, which transforms the training task into computationally favourable distributed linear regression. Furthermore, clients generate local parity datasets by coding over their local datasets, and the server combines them to obtain the global parity dataset. During training, the gradient computed over the global parity dataset compensates for straggling gradients, thereby speeding up convergence. To minimize the epoch deadline time at the MEC server, we provide a tractable approach for finding the amount of coding redundancy and the number of local data points that a client processes during training, by exploiting the statistical properties of compute as well as communication delays. We also characterize the leakage in data privacy when clients share their local parity datasets with the server. Additionally, we analyze the convergence rate and iteration complexity of CodedFedL under simplifying assumptions, by treating CodedFedL as a stochastic gradient descent algorithm. Finally, to demonstrate the gains that CodedFedL can achieve in practice, we conduct numerical experiments using practical network parameters and benchmark datasets, in which CodedFedL speeds up overall training by up to 15× in comparison to the benchmark schemes.
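Since the key enabler in this abstract is the random Fourier feature (RFF) embedding, here is a minimal sketch of that standard transform, assuming an RBF kernel; the function name, parameters, and the shared seed (needed so all clients compute identical features) are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def rff_features(X, num_features=256, gamma=1.0, seed=0):
    """Map data X (n x d) into a feature space where inner products
    approximate the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    # Shared seed: every client must draw the same random features
    # for the distributed linear regression to be consistent.
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies sampled from the Fourier transform of the RBF kernel.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

# After this embedding, each client fits a linear model on
# (rff_features(X_i), y_i), so the global non-linear training task reduces
# to distributed linear regression over the transformed features.
```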
A new efficient parametric algorithm for implementing the belief propagation decoder of low-density lattice codes (LDLC) is presented. In the new algorithm, the messages passed over the edges are represented by lists of Gaussian parameters, and the decoding algorithm exploits the propagation properties of LDLC to group lists efficiently according to a new criterion. The new algorithm attains essentially the same performance as the quantized decoder proposed in previous work. Its advantages over previous work are smaller storage requirements and relatively low computational complexity.
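To illustrate the message representation described above, the following sketch stores a BP message as a list of (weight, mean, variance) Gaussian parameters and merges nearby components by moment matching to keep lists short; the grouping rule shown (a simple threshold on means) is an assumed stand-in for the paper's new criterion, not its actual definition.

```python
def merge_gaussian_list(components, mean_tol=0.1):
    """components: list of (weight, mean, variance) tuples representing a
    Gaussian-mixture BP message. Greedily groups components whose means lie
    within mean_tol of each other and replaces each group by a single
    moment-matched Gaussian, bounding the list size."""
    merged = []
    for w, m, v in sorted(components, key=lambda c: c[1]):  # sort by mean
        if merged and abs(m - merged[-1][1]) < mean_tol:
            w0, m0, v0 = merged[-1]
            wt = w0 + w
            mt = (w0 * m0 + w * m) / wt  # matched first moment
            vt = (w0 * (v0 + m0 ** 2) + w * (v + m ** 2)) / wt - mt ** 2  # matched second moment
            merged[-1] = (wt, mt, vt)
        else:
            merged.append((w, m, v))
    return merged

# Example: the two nearby components collapse into a single Gaussian.
print(merge_gaussian_list([(0.5, 0.00, 1.0), (0.5, 0.05, 1.0), (1.0, 3.0, 0.5)]))
```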
Federated learning is a method of training a global model from decentralized data distributed across client devices. Here, model parameters are computed locally by each client device and exchanged with a central server, which aggregates the local models into a global view without requiring any sharing of training data. The convergence performance of federated learning is severely impacted in heterogeneous computing platforms such as those at the wireless edge, where straggling computations and communication links can significantly limit timely model parameter updates. This paper develops a novel coded computing technique for federated learning to mitigate the impact of stragglers. In the proposed Coded Federated Learning (CFL) scheme, each client device privately generates parity training data and shares it with the central server only once, at the start of the training phase. The central server can then preemptively perform redundant gradient computations on the composite parity data to compensate for erased or delayed parameter updates. Our results show that CFL allows the global model to converge nearly four times faster than an uncoded approach.
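To make the parity-data idea concrete, below is a minimal sketch assuming a linear model, squared loss, and random linear coding over a client's raw data; the helper name and the Gaussian generator matrix are illustrative assumptions, not CFL's exact construction.

```python
import numpy as np

def make_parity_data(X, y, num_parity, seed=0):
    """Encode a client's local dataset (X: n x d, y: length n) into
    num_parity coded samples that are shared with the server once."""
    rng = np.random.default_rng(seed)
    # Random generator matrix, normalized so that E[G.T @ G] = I.
    G = rng.normal(size=(num_parity, X.shape[0])) / np.sqrt(num_parity)
    return G @ X, G @ y

# The server stacks all clients' parity data into a composite (X_p, y_p).
# For a linear model w under squared loss, a client's parity gradient
#   X_p.T @ (X_p @ w - y_p) = X.T @ G.T @ G @ (X @ w - y)
# matches its uncoded gradient in expectation (since E[G.T @ G] = I), so
# the server can substitute it for updates from straggling clients.
```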
The fundamental and natural connection between the infinite constellation (IC) dimension and the best diversity order it can achieve is investigated in this paper. In the first part of this work we develop an upper bound on the diversity order of ICs for any dimension and any number of transmit and receive antennas. By choosing the right dimensions, we prove in the second part of this work that ICs in general, and lattices in particular, can achieve the optimal diversity-multiplexing tradeoff of finite constellations. This work gives a framework for designing lattices for multiple-antenna channels using lattice decoding.

I. INTRODUCTION

The use of multiple antennas in wireless communication has certain inherent advantages. On one hand, using multiple antennas in fading channels makes it possible to increase the reliability of the transmitted signal, i.e. diversity. For instance, diversity can be attained by transmitting the same information on different paths between transmitting-receiving antenna pairs with i.i.d. Rayleigh fading distribution. The number of independent paths used is the diversity order of the transmitted scheme. On the other hand, the use of multiple antennas increases the number of degrees of freedom offered by the channel. In [1], [2] the ergodic channel capacity was obtained for multiple-input multiple-output (MIMO) systems with M transmit and N receive antennas, where the paths have i.i.d. Rayleigh fading distribution. It was shown that for large signal-to-noise ratios (SNR), the capacity behaves as C(SNR) ≈ min(M, N) log(SNR). The multiplexing gain is the number of degrees of freedom utilized by the transmitted scheme.

For the quasi-static Rayleigh flat-fading channel, Zheng and Tse [3] characterized the dependence between the diversity order and the multiplexing gain by deriving the optimal tradeoff between diversity and multiplexing, i.e. for each multiplexing gain the maximal diversity order was found. They showed that the optimal diversity-multiplexing tradeoff (DMT) can be attained by an ensemble of i.i.d. Gaussian codes, given that the block length is greater than or equal to N + M − 1. In this case, the tradeoff curve takes the form of the piecewise linear function that connects the points (l, (N − l)(M − l)), l = 0, 1, . . . , min(M, N).

Space-time codes are coding schemes designed for MIMO systems; see, e.g., [4], [5], [6] and references therein. The design of space-time codes in these works pursues various goals, such as maximizing the diversity order, maximizing the multiplexing gain, or achieving the optimal DMT. El Gamal et al. [7] were the first to show that lattice coding and decoding achieve the optimal DMT. They presented lattice space-time (LAST) codes. These space-time codes are subsets of an infinite lattice, where the lattice dimensionality equals the number of degrees of freedom available from the channel, i.e. min(M, N), multiplied by the number of channel uses. By using a random ensemble of nested lattices, common randomness, minimum mean square error (MMSE) estimation followed by lattice decoding and m...
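As a worked illustration of the tradeoff curve quoted above, the short sketch below evaluates the piecewise linear function through the points (l, (N − l)(M − l)); the function name is hypothetical.

```python
import numpy as np

def dmt_curve(M, N, r):
    """Optimal diversity order d*(r) at multiplexing gain r, 0 <= r <= min(M, N):
    linear interpolation through the points (l, (M - l)(N - l))."""
    l = np.arange(min(M, N) + 1)
    return np.interp(r, l, (M - l) * (N - l))

# Example: for M = N = 2 the curve connects (0, 4), (1, 1), (2, 0), so
# d*(0) = 4.0, d*(0.5) = 2.5, d*(1) = 1.0, d*(2) = 0.0.
print(dmt_curve(2, 2, 0.5))  # 2.5
```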