2021
DOI: 10.1016/j.neucom.2021.01.020
A consensus-based decentralized training algorithm for deep neural networks with communication compression

Cited by 13 publications (4 citation statements)
References 25 publications
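The indexed paper's title names consensus-based decentralized training with communication compression. As a rough, generic illustration of that family of techniques (not the paper's specific algorithm), the sketch below performs one gossip-style consensus step in which every node averages its parameters with top-k-compressed copies received from its neighbors; the mixing weight, the compression operator, and all names are assumptions.

```python
import numpy as np

def topk_compress(x: np.ndarray, k: int) -> np.ndarray:
    """A common communication-compression operator: keep only the k
    largest-magnitude entries of x and zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

def consensus_step(params: dict, neighbors: dict, mix=0.5, k=2) -> dict:
    """One gossip-style consensus update over all nodes.

    params:    node id -> parameter vector (np.ndarray)
    neighbors: node id -> list of neighboring node ids
    Each node moves toward the average of its neighbors' compressed
    parameters. Real algorithms typically also compress parameter
    *differences* and apply error feedback; this is only a
    simplified sketch.
    """
    sent = {i: topk_compress(p, k) for i, p in params.items()}
    return {
        i: (1 - mix) * p
           + mix * np.mean([sent[j] for j in neighbors[i]], axis=0)
        for i, p in params.items()
    }
```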
“…The system must first clean the collected data, remove features that contain default values or large amounts of duplicated records, and extract the various features that affect signal coverage quality. The selected features are then fed into the prediction model to train the best signal-coverage-quality prediction model [15], [16]. Specifically, the analysis model shown in Figure 2 consists mainly of the signal quality prediction model and the base station deployment planning model.…”
Section: Big Data Base Station Traffic Processing Model Integrating C…
confidence: 99%
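The excerpt above describes a standard cleaning pipeline: drop feature columns dominated by placeholder defaults, de-duplicate records, then hand the surviving features to the predictor. A minimal sketch of that preprocessing step is given below; the column threshold, the sentinel value, and all names are illustrative assumptions, not details from the cited system.

```python
import pandas as pd

def clean_features(df: pd.DataFrame,
                   default_value=-1,
                   max_default_ratio=0.5) -> pd.DataFrame:
    """Illustrative preprocessing for signal-coverage features.

    Drops columns where more than `max_default_ratio` of entries equal
    the placeholder `default_value`, then removes duplicate rows.
    """
    # Fraction of placeholder entries in each feature column.
    default_ratio = (df == default_value).mean()
    df = df.loc[:, default_ratio <= max_default_ratio]
    # Remove duplicated measurement records.
    return df.drop_duplicates()
```

The cleaned frame would then be split into features and a coverage-quality label and passed to whatever prediction model the system trains.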
“…The findings show faster convergence and better model correctness than standard optimization strategies. Complex multidimensional situations may challenge this model [9]. Chen, M., et al. (2021)…”
Section: Literature Survey
confidence: 99%
“…In this context, many algorithms have been published recently. For example, in the context of targeting expensive-communication problems, the authors in [5] improved on well-known FL algorithms such as FedAvg and FedProx; FedSim emphasizes the significance of these contributions. In [23], the authors proposed model-heterogeneous aggregation training (MHAT) to handle local models with different network architectures.…”
Section: Introduction
confidence: 99%
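Since the excerpt names FedAvg, the sketch below shows the core server-side step of that algorithm: averaging client models weighted by local dataset size. This is a generic illustration of FedAvg-style aggregation, not the implementation from any cited paper; all names are assumptions.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """One FedAvg-style server step: average client models weighted by
    their local dataset sizes. `client_weights` is a list of lists of
    numpy arrays (one list per client, one array per layer)."""
    total = float(sum(client_sizes))
    num_layers = len(client_weights[0])
    averaged = []
    for layer in range(num_layers):
        # Weighted sum of this layer across all clients.
        acc = sum(w[layer] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        averaged.append(acc)
    return averaged

# Example: two clients holding a single-layer model.
clients = [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]]
sizes = [100, 300]
print(fedavg_aggregate(clients, sizes))  # -> [array([2.5, 3.5])]
```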
“…Therefore, given the splitting process followed in this case, the model will not address FL issues related to either statistical or system heterogeneity. [Flattened table fragment: expensive communication [5]; MNIST [6]; FedAvg variants]…”
confidence: 99%