2022
DOI: 10.1109/tc.2021.3135752
Evaluation and Optimization of Distributed Machine Learning Techniques for Internet of Things

Cited by 43 publications (29 citation statements)
References 25 publications
“…These operations are not affected by the client order as the local client-side models are aggregated by the weighted averaging method, i.e., FedAvg. Some other SFL versions are available in the literature, but they are developed after and influenced by our approach (Han, and Jungmoon Lee, and Moon 2021; Gao et al 2021).…”
Section: Variants of SplitFed
confidence: 99%
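The statement above attributes order-independence to FedAvg's weighted averaging of client-side models. A minimal sketch of that aggregation, assuming model weights are plain lists of floats (function and variable names are illustrative, not from the paper):

```python
# Minimal FedAvg sketch: average client model weights, weighted by
# each client's local dataset size. Names are illustrative assumptions.
def fedavg(client_weights, client_sizes):
    """Return the size-weighted average of the clients' weight vectors."""
    total = sum(client_sizes)
    averaged = [0.0] * len(client_weights[0])
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            averaged[i] += w * (size / total)
    return averaged

# Order-independence: aggregating the same clients in a different order
# produces the same global weights.
a = fedavg([[1.0, 2.0], [3.0, 4.0]], [10, 30])
b = fedavg([[3.0, 4.0], [1.0, 2.0]], [30, 10])
```

Because the weighted sum is commutative, `a` and `b` are identical, which is the property the cited statement relies on.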
“…The Conv1-D architecture [28] was used for the primary experimentation on the ECG time-series dataset. Conv1-D was selected for ECG because of its efficiency in dealing with sequential data [29, 30]. Another determining factor for selecting the Conv1-D architecture instead of sequential models (such as LSTM, GRU, and RNN) is that there is no effective approach for splitting a sequence model between the client and server in the SFL setting.…”
Section: Datasets and Models
confidence: 99%
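The statement motivates Conv1-D partly because a convolutional model is straightforward to split between client and server in split learning. A toy illustration of such a split on a 1-D signal; the kernel values, cut point, and server sub-network here are assumptions for illustration, not taken from the paper:

```python
# Illustrative client/server split of a Conv1-D forward pass in an
# SFL-style setting. Kernel and split point are assumed for the sketch.

def conv1d(signal, kernel):
    """Valid 1-D convolution (cross-correlation) over a signal."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def client_forward(ecg_window):
    # Client side: early conv layer; its output ("smashed data")
    # is what gets sent to the server.
    return relu(conv1d(ecg_window, kernel=[0.5, -1.0, 0.5]))

def server_forward(smashed):
    # Server side: stand-in for the remaining sub-network
    # (here just a mean pooling over the smashed data).
    return sum(smashed) / len(smashed)

signal = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0]
out = server_forward(client_forward(signal))
```

The clean cut after the conv layer is what a recurrent model lacks: an LSTM's hidden state threads through every time step, so there is no single activation boundary to ship to the server.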
“…Therefore, [10] decided to deploy edge servers as coadjutants to alleviate the communication and computation load of the SL server; each edge server then interacts with one or several clients to exchange gradients, while the SL server further calculates averaged gradients and updates the subnetworks at the edge servers. [11] put forward similar ideas by deploying multiple FL servers to handle groups of clients. Additionally, they conducted comprehensive experiments on Raspberry Pi devices.…”
Section: Related Work
confidence: 99%
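The edge-server arrangement described above amounts to two-level weighted averaging: each edge server aggregates its own group of clients, and the central server averages across edge servers. A sketch under assumed groupings, sizes, and scalar updates (all illustrative):

```python
# Two-level (hierarchical) aggregation sketch: edge servers aggregate
# their attached clients; the central server aggregates edge results.
# Groupings, sizes, and scalar updates are illustrative assumptions.

def weighted_mean(values, weights):
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total

def edge_aggregate(client_updates, client_sizes):
    """Edge server: size-weighted average over its own clients."""
    return weighted_mean(client_updates, client_sizes), sum(client_sizes)

def central_aggregate(edge_results):
    """Central server: average edge-level results, weighted by group size."""
    means = [m for m, _ in edge_results]
    sizes = [n for _, n in edge_results]
    return weighted_mean(means, sizes)

# Two edge servers, each handling two clients (scalar updates for brevity).
edge1 = edge_aggregate([1.0, 3.0], [10, 10])   # (2.0, 20)
edge2 = edge_aggregate([5.0, 7.0], [20, 20])   # (6.0, 40)
global_update = central_aggregate([edge1, edge2])
```

Weighting the second stage by group size makes the two-level result match a flat size-weighted average over all clients, so the hierarchy reduces server load without changing the aggregate.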
“…For cellular clients, we borrow ideas from [8]- [11], and integrate them into the proposed HFSL. As shown by the right side of Fig.…”
Section: B. Architecture of HFSL
confidence: 99%