2020
DOI: 10.1109/mcom.001.2000410
From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks

Abstract: Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices, via iterative local updates (at devices) and global aggregations (at the server). In this paper, we develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions: (i) Network, allowing decentralized cooperation among the devices via device-to-device (D2D) communications. (ii) Heterogeneity, interpreted at three levels: (ii-a) Learning: PSL conside…

Cited by 139 publications (77 citation statements)
References 42 publications
“…is considered to have a prime role in supporting fully autonomous B5G and 6G networks, which are well described in [30]. Therefore, investigating multi-agent RL (MARL) environments, where multiple devices interact with each other and share already-learned parameters using federated learning, is an open issue, and future research should study MARL scenarios.…”
Section: Discussion
confidence: 99%
“…As the adoption of AI technologies accelerates, the integration of various monitoring and control systems within a centralized cloud can limit the scalability in such systems. Hence, today's predominantly cloud-centric AI solutions that rely on training and inference in the remote cloud have to be complemented by more energy-efficient, partially distributed, and ultimately fully distributed learning mechanisms where numerous devices collaboratively train a part of a global model [14].…”
Section: Open Issues In Employing AI For 6G-MNs, A. From Centralized To...
confidence: 99%
“…Conventionally, DNN algorithms are executed in the cloud where training data are preprocessed at the edge before being transferred to the cloud [14]. The edge/fog computing infrastructures are intended to accommodate the needs of multiple DNN models that require locality and persistent training.…”
Section: Open Issues In Employing AI For 6G-MNs, A. From Centralized To...
confidence: 99%
“…Implementations of FL over the wireless edge are affected by heterogeneity in communication and computation capabilities across the devices [6]. To improve communication efficiency, several works have focused on reducing the number of uplink/downlink communication rounds by performing multiple iterations of local model updates between consecutive global aggregations [7], [8].…”
Section: A. Related Work
confidence: 99%
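The last citation statement refers to improving communication efficiency by running multiple local model updates between consecutive global aggregations, as in FedAvg-style local SGD. A minimal sketch of that pattern, assuming a simple least-squares objective and synthetic per-device data (all names, sizes, and hyperparameters here are illustrative, not from the cited works):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data split across devices (hypothetical setup).
NUM_DEVICES, SAMPLES, DIM = 5, 40, 3
w_true = rng.normal(size=DIM)
device_data = []
for _ in range(NUM_DEVICES):
    X = rng.normal(size=(SAMPLES, DIM))
    y = X @ w_true + 0.01 * rng.normal(size=SAMPLES)
    device_data.append((X, y))

def local_updates(w, X, y, steps, lr):
    """Run several local gradient steps before the next global aggregation."""
    for _ in range(steps):
        grad = 2.0 / len(y) * X.T @ (X @ w - y)  # MSE gradient
        w = w - lr * grad
    return w

def fedavg(rounds=20, local_steps=5, lr=0.05):
    w_global = np.zeros(DIM)
    for _ in range(rounds):
        # Each device performs multiple local iterations, so fewer
        # uplink/downlink rounds are needed overall.
        local_models = [local_updates(w_global, X, y, local_steps, lr)
                        for X, y in device_data]
        # Server aggregates by simple averaging (equal-sized local datasets).
        w_global = np.mean(local_models, axis=0)
    return w_global

w = fedavg()
print(np.linalg.norm(w - w_true))  # residual error after training
```

Raising `local_steps` trades more on-device computation for fewer global aggregation rounds, which is the communication-efficiency lever the citing works ([7], [8] in that excerpt) investigate.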