2021
DOI: 10.48550/arxiv.2112.14387
Preprint
Training Time Minimization for Federated Edge Learning with Optimized Gradient Quantization and Bandwidth Allocation

Abstract: Training a machine learning model with federated edge learning (FEEL) is typically time-consuming due to the constrained computation power of edge devices and limited wireless resources in edge networks. In this paper, the training time minimization problem is investigated in a quantized FEEL system, where the heterogeneous edge devices send quantized gradients to the edge server via orthogonal channels. In particular, a stochastic quantization scheme is adopted for compression of uploaded gradients, which can…
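The abstract mentions a stochastic quantization scheme for compressing uploaded gradients. A minimal sketch of how such a scheme typically works (QSGD-style unbiased rounding; the paper's exact scheme may differ, and all names here are illustrative):

```python
import numpy as np

def stochastic_quantize(g, bits):
    """Stochastically quantize gradient vector g to the given bit width.

    Each magnitude is scaled into [0, levels] and rounded up with
    probability equal to its fractional part, so E[q] == g (unbiased).
    """
    levels = 2 ** bits - 1                # number of quantization levels
    norm = np.linalg.norm(g, np.inf)
    if norm == 0:
        return np.zeros_like(g)
    scaled = np.abs(g) / norm * levels    # map magnitudes to [0, levels]
    lower = np.floor(scaled)
    prob = scaled - lower                 # fractional part
    q = lower + (np.random.rand(*g.shape) < prob)
    return np.sign(g) * q / levels * norm

g = np.array([0.5, -1.0, 0.25, 0.0])
q = stochastic_quantize(g, bits=2)        # each entry lands on a 2-bit level
```

Coarser bit widths shrink the uplink payload at the cost of higher quantization variance, which is exactly the trade-off the paper's training-time minimization balances.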


Cited by 2 publications (4 citation statements)
References 29 publications
“…The results of simulations on Jetson TX2 show that this scheme improves DL inference throughput by a factor of 3.3 to 3.8. Liu et al. [104] proposed a training time model and an alternating-optimization-based algorithm to solve the training time minimization problem in a quantized FEEL system. Experiments show that the optimization algorithm proposed by the authors approaches the optimal performance under different learning tasks and models.…”
Section: Model Compression
confidence: 99%
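The alternating optimization mentioned in this citation follows a standard pattern: fix one block of variables (e.g. quantization level), solve for the other (e.g. bandwidth allocation), and repeat. A generic sketch with a toy objective (the solvers and objective are illustrative stand-ins, not the paper's actual subproblems):

```python
def alternating_minimize(f, x0, y0, solve_x, solve_y, iters=50):
    """Generic alternating optimization: fix one block, solve the other.

    solve_x(y) returns the x minimizing f(., y); solve_y(x) the converse.
    """
    x, y = x0, y0
    for _ in range(iters):
        x = solve_x(y)   # optimize x with y fixed
        y = solve_y(x)   # optimize y with x fixed
    return x, y, f(x, y)

# Toy objective f(x, y) = (x - 1)^2 + (y - 2)^2 + x*y has closed-form
# block updates x = 1 - y/2 and y = 2 - x/2; the iterates converge to
# the stationary point (0, 2).
f = lambda x, y: (x - 1) ** 2 + (y - 2) ** 2 + x * y
x, y, val = alternating_minimize(f, 0.0, 0.0,
                                 lambda y: 1 - y / 2,
                                 lambda x: 2 - x / 2)
```

Each subproblem here has a closed-form solution, which is what makes the alternating scheme attractive when the joint problem is non-convex.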
“…In general, the original local model consists of training weights stored as 32-bit floats, which may be quantized to an integer $c$-bit ($1 \le c < 32$) quantized model [2]. Let $b(Q_c^t(w_u^{i,r})) = (1 + \log_2(c + 1))\,|w_u^{i,r}|$ represent the volume of the transmitted quantized model of $n_u^i$, which is a function of the size of the quantized weights (i.e., $|w_u^{i,r}|$) as well as the bit width $c$ [9]. Therefore, based on Shannon's theorem, the wireless bandwidth used to transmit the local quantized model of UE $n_u^i$ during the $r$-th communication round can be given by…”
Section: Communication Model
confidence: 99%
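The quoted payload formula and the Shannon-capacity bandwidth argument can be made concrete with a short numerical sketch (the deadline and SNR values are hypothetical; the equation for the bandwidth itself is not reproduced in this excerpt, so the Shannon-rate rearrangement below is a generic illustration):

```python
import math

def quantized_model_bits(num_weights, c):
    """Payload size b = (1 + log2(c + 1)) * |w| bits, per the quoted formula."""
    return (1 + math.log2(c + 1)) * num_weights

def bandwidth_needed(bits, deadline_s, snr):
    """Bandwidth (Hz) to deliver `bits` within `deadline_s`, from the
    Shannon rate R = B * log2(1 + SNR) rearranged for B."""
    rate_per_hz = math.log2(1 + snr)      # spectral efficiency, bits/s/Hz
    return bits / (deadline_s * rate_per_hz)

b = quantized_model_bits(num_weights=1_000_000, c=8)   # 8-bit quantization
B = bandwidth_needed(b, deadline_s=0.5, snr=15)        # SNR 15 -> 4 bits/s/Hz
```

Lowering the bit width c shrinks b logarithmically, which in turn shrinks the bandwidth (or time) needed per round.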
“…Note that problem (8) is equivalent to problem (9), as we can always find suitable multipliers λ 1 and λ 2 for problem (9) that approximate the optimal solution of problem (8) [12]. In other words, we can obtain the optimal solution of problem (8) via problem (9).…”
Section: Aqed Scheme
confidence: 99%
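The equivalence invoked in this citation is the standard Lagrangian-relaxation argument. Since problems (8) and (9) are not reproduced in this excerpt, the forms below are generic stand-ins rather than the paper's actual formulations:

```latex
% Constrained problem (schematic stand-in for problem (8)):
%   \min_{x} f(x) \quad \text{s.t.} \quad g_1(x) \le 0, \; g_2(x) \le 0
% Lagrangian relaxation (stand-in for problem (9)), with multipliers
% \lambda_1, \lambda_2 \ge 0 chosen so its optimum matches the constrained one:
\min_{x} \; f(x) + \lambda_1 g_1(x) + \lambda_2 g_2(x)
```

Under suitable conditions (e.g. strong duality), sweeping the multipliers recovers the constrained optimum, which is the sense in which the two problems are interchangeable.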